Phys 131, Fall 2012
Lab 1 (Part 2)—Introduction to ImageJ and Analysis of Cell Motion
This is the second week of a two-week lab studying cell motion. Last week we learned how to use Excel to analyze the 1-D motion of an amoeba. This week we will learn how to use ImageJ to analyze videos of cell motion.

The Scenario: A patient has a wound, in the process of healing, that is infected with bacteria. Will the patient need antibiotics? To explore this scenario, you will analyze videos of: 1) wound healing, 2) neutrophil motion, and 3) bacteria motion. Clearly, the relative speeds of the wound healing, the neutrophils, and the bacteria will affect your decision. Thus it becomes important that we learn how to quantify the motion of cells and how to analyze videos.
Your lab group has been provided with six video files—a long and a shorter version of each of the three processes: wound healing, neutrophil motion, and bacteria motion. Each video is a sequence of images called ‘frames’; taken together, each video is an ‘image sequence’ or ‘stack.’ The wound healing videos, ‘WoundHealing,’ show breast tissue cell sheet migration. The ‘Neutrophils’ videos show white blood cells responding to six different concentrations of fMLP—the chemical indicator of bacteria. The bacteria videos show E. coli motion.

By viewing the longer video files, you can begin to examine the qualitative aspects of our scenario. These videos are rich in detail, but the files contain too much data to be analyzed in our limited lab time. From the shorter videos, your task is to perform a quantitative analysis, with ImageJ and Excel, of the rates of motion of these cells. This quantitative analysis should help you problem-solve within this scenario.

Today you will practice and master the skills necessary to analyze motion using ImageJ. After today, you will ALL be expected to be experts at these skills, so take turns and help each other learn. Take notes for the future if you are worried that you will forget.
At the end of the lab today, your group will submit one lab report. This will be reviewed by the TA according to the Scientific Community Lab rubric. Good attention to detail now will save you time later! Remember, your TA is here to help you with equipment and ImageJ, but the physics is up to you and your group!
Video Files:
Approximate Timing (2 hours):
- Introduction: 10 minutes
- Data Collection & Analysis, 1st video: 30 minutes
- Class Discussion of 1st video: 5 minutes
- Data Collection & Analysis, 2nd video: 30 minutes
- Conferring with companion group, comparing all three videos: 10 minutes
- Class Discussion/Summation: 10 minutes
- Finalizing Lab Report: 25 minutes
Technical Intro to ImageJ (ImageJ versions 1.45/1.46)
Open the ImageJ software by double-clicking on the desktop icon. The menu bar will appear (see below).
* Opening an ‘.avi’ Video File:
To open a video file in ImageJ, click on the ‘File’ menu, then choose ‘Import,’ then choose ‘AVI…’. Select the video you wish to import then click ‘Okay.’ Click ‘Okay’ in the ‘AVI Reader’ window that opens. (For some videos, you may find it helpful to use the other available options in the AVI Reader window, but for today’s videos you do not need them.) Once the video file is opened in ImageJ, you should be able to watch the video on a loop by clicking the play button at the bottom of the video window. You can also advance through the ‘stack’ of images in the video (‘image sequence’), ‘frame’ by ‘frame,’ by clicking on the right and left arrows surrounding the scroll bar across the bottom of the video window. Information about which frame of the stack you are viewing is located at the top of the window. Explore these options.
In viewing this video, you will need to decide to which objects/points on the object you will pay attention. When you collect position data, what will you ‘track’? Here are some questions you might consider:
- How many objects/points do we need to track?
- What part of the object are we tracking?
- How do we pick a good candidate for tracking?
- What makes a candidate good/bad?
- What challenges exist for tracking the object?
- When do you start following an object, and when do you stop?
* Gathering Data with the Manual Tracking Plugin:
Once you have decided which objects/points to consider in collecting data, you can open the Manual Tracking plugin (additional Java code already installed on your computer). The Manual Tracking plugin allows you to collect horizontal (X) and vertical (Y) position coordinates for objects at different times within the video by clicking on the object—you can ‘track’ an object. To activate the Manual Tracking plugin, click on the ‘Plugins’ menu, select ‘Stacks,’ and choose ‘Manual Tracking.’ A new ‘Tracking’ window will appear. Ignore the parameters at the bottom of the window—we will simply use the X and Y pixel locations generated by manual tracking.
Make sure that the video is displaying the frame with which you wish to begin, then click ‘Add track’ in the Tracking window. The parameters portion of the Tracking window will disappear. Move your cursor over the object you wish to track (you may only track one object at a time) in the video window, select your location, and click on the object. Two new windows will appear; the data window we want is titled “Results from (your file name) in (your units of speed).” The first line of this window will show a -1 for Distance and Velocity, since these quantities cannot be calculated from a single data point. The video window will have automatically advanced one frame.

To get the next data point, simply re-center the cursor on the object you are tracking and click again. The data will be logged and the stack will advance one frame. Repeat this process until the stack is finished or until you choose to stop (if the stack is not finished when you choose to stop, click ‘End track’ in the Tracking window). To track a new object, reset the video window to the frame with which you wish to begin and then click ‘Add track’ in the Tracking window.

If you make a mistake (an errant click, etc.), you can use the ‘Delete last point’ or ‘Delete track no.’ buttons to remove the data. If necessary, you can start over by clicking the ‘Delete all tracks’ option.
When you have collected all the data you wish to collect, you can review your work by clicking the ‘Overlay Dots & Lines’ button in the Tracking window. A new window will open; click play in this window to see your work.
In some cases, it may be easier to track the motion of an object/point if you zoom in or out on the video. To do this you can either 1) use the magnifying glass icon with the ‘+’ and ‘-’ keys to select an area for zooming, or 2) click on the ‘Image’ menu, choose ‘Zoom,’ and use ‘In’, ‘Out’, or ‘To Selection’ according to your needs.
* Understanding the ‘Centring’ Option in Manual Tracking:
The Manual Tracking plugin has a ‘centring’ (British English spelling of ‘centering’) option that can use the brightness/darkness of surrounding pixels within a frame to adjust your tracking ‘click’ to the most extreme (brightest/darkest) pixel in a search square around your click. This can be useful for certain types of videos where a uniform/constant part of the object you are tracking is either very bright (local maximum) or very dark (local minimum). If you wish to use the ‘Centring Correction’, tick the box in the Tracking window to activate this option. The ‘Centring Correction’ must be re-ticked for each new object tracked. You may need to adjust the Tracking window parameter ‘Search square size for centring’ to get a properly sized square for searching around your ‘click.’ To do this, tick ‘Show parameters?’ in the Tracking window and enter an appropriate pixel size for your searching square based on the dimensions of the object that you are tracking.
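The idea behind the centring correction can be sketched in a few lines of Python. This is only an illustration, not ImageJ's actual code: the function name, the toy image, and the default search-square size are all invented. The sketch snaps a click to the brightest (or darkest) pixel inside a search square around the click, clipped at the image edges:

```python
import numpy as np

def centre_click(frame, x, y, half=5, mode="bright"):
    """Snap a click at (x, y) to the most extreme pixel inside a
    (2*half + 1)-pixel search square, clipped at the image edges.
    This mimics the idea of the plugin's centring correction."""
    y0, y1 = max(0, y - half), min(frame.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(frame.shape[1], x + half + 1)
    window = frame[y0:y1, x0:x1]
    # Local maximum for a bright object, local minimum for a dark one
    flat = window.argmax() if mode == "bright" else window.argmin()
    dy, dx = np.unravel_index(flat, window.shape)
    return int(x0 + dx), int(y0 + dy)

# Toy frame: one bright pixel near an imprecise click at (10, 10)
frame = np.zeros((20, 20))
frame[8, 12] = 255
print(centre_click(frame, 10, 10))  # → (12, 8)
```

This is why the search-square size matters: if the square is too small the extreme pixel of your object may fall outside it, and if it is too large the correction may snap to a neighboring object instead.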
* Exporting the Manual Tracking Data to Excel:
When the data collection is finished, go to your data window (titled “Results from (your file name) in (your units of speed)”) and use the menus either to:
- Option 1: ‘Edit’ → ‘Select All’ → ‘Ctrl’ + ‘C’ to Copy → Paste into an Excel file (you must add the column titles yourself), or
- Option 2: ‘File’ → ‘Save As…’ → Name the file what you will and click ‘Save.’ (It will save in Excel format with column titles.)
* Challenges of Interpreting the Data:
The data collected by the Manual Tracking plugin (see above) include: data point number (which mouse click is this?), Track number (which object is this?), Slice number (which frame of the video is this?), X pixel coordinate, Y pixel coordinate, Distance, and Velocity (technically, a speed). Manual Tracking also records the Pixel Value, a measure of the brightness of the pixel selected when the object is clicked upon. These last three columns, Distance, Velocity, and Pixel Value, are useless to us: Distance and Velocity have not been scaled correctly, and Pixel Value gives us no motion information. The first five columns are what we will use in our motion analysis.
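If you would rather inspect the exported data outside Excel, a short Python sketch can read it. The header text and numbers below are invented to match the description above; a file saved via ‘Save As…’ is tab-separated text, so you could replace StringIO(raw) with your own file handle:

```python
import csv
from io import StringIO

# A few rows in the layout of the Results window. The header text and
# values are invented for illustration; to read a real export, replace
# StringIO(raw) with open("your_results.xls") (tab-separated text).
raw = ("point\tTrack\tSlice\tX\tY\tDistance\tVelocity\tPixel Value\n"
       "1\t1\t1\t102\t240\t-1\t-1\t143\n"
       "2\t1\t2\t110\t238\t8.2\t8.2\t151\n")

rows = list(csv.reader(StringIO(raw), delimiter="\t"))
header, data = rows[0], rows[1:]
# Keep only the first five columns: point no., Track, Slice, X, Y
useful = [row[:5] for row in data]
print(useful[0])  # → ['1', '1', '1', '102', '240']
```

Dropping the last three columns in code mirrors the advice above: only the point number, Track, Slice, X, and Y columns carry usable motion information.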
Interpreting this data presents several unique challenges that your group must address:
- How do we determine the correct time for a data point?
- How do we convert the X and Y pixel coordinates into x- and y-positions?
- Do we need to consider both the x and y motions? Why or why not?
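One possible way to work through these questions can be sketched in Python. Everything here is hypothetical: DT and SCALE are placeholder calibration numbers, and the data rows are invented; the real frame interval and pixel scale must come from your video. The Slice number gives the time, the pixel scale converts coordinates to positions, and successive points give a displacement and a speed:

```python
import math

# Hypothetical calibration values: get the real frame interval and
# pixel scale from your video's documentation or a scale bar.
DT = 30.0      # seconds of real time between consecutive frames
SCALE = 0.65   # micrometres per pixel

# (Slice, X, Y) rows as logged by Manual Tracking (invented values).
# Note: in ImageJ the Y pixel coordinate increases downward.
points = [(1, 102, 240), (2, 110, 238), (3, 121, 235)]

for (s1, x1, y1), (s2, x2, y2) in zip(points, points[1:]):
    dt = (s2 - s1) * DT              # elapsed time, s
    dx = (x2 - x1) * SCALE           # x displacement, micrometres
    dy = (y2 - y1) * SCALE           # y displacement, micrometres
    speed = math.hypot(dx, dy) / dt  # 2-D speed, micrometres per second
    print(f"frames {s1} -> {s2}: speed = {speed:.3f} um/s")
```

Using both dx and dy in the speed calculation answers the last question for motion that is genuinely two-dimensional; if an object only moves along one axis, the other term simply contributes zero.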