Getting started

Command line interface

You can analyse a dataset using the command line interface as follows

python -m mps_motion analyze PointH4A_ChannelBF_VC_Seq0018.nd2

or equivalently

mps-motion analyze PointH4A_ChannelBF_VC_Seq0018.nd2

Here PointH4A_ChannelBF_VC_Seq0018.nd2 is an example file containing cardiac cell data. Note that in order to read these data you also need to install the mps package. To see all available options for the CLI you can do

python -m mps_motion analyze --help

See Command line interface for more info.

Computing displacement, velocity and strain

First we need to import the required libraries

from pathlib import Path
import matplotlib.pyplot as plt # For plotting
import mps # Package to load data
import mps_motion # Package for motion analysis
import logging

# Set log level to WARNING to avoid cluttering the output
mps_motion.set_log_level(logging.WARNING)

Next, let us download some sample data. There is one dataset available here, but you should swap out the path with the path to your own data.

path = Path("data.npy")
if not path.is_file():
    mps_motion.utils.download_demo_data(path)

Now we will read this file using the cardiac-mps package and print some info about the file

# Load raw data from Nikon images
data = mps.MPS(path)
print(data.info)
{'num_frames': 267, 'dt': 11.926342039179985, 'time_unit': 'ms', 'um_per_pixel': 0.65, 'size_x': 1022, 'size_y': 587}
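
For example, we can compute the physical size of the field of view from these values (this is just arithmetic on the dictionary printed above)

# Physical size of the field of view (pixel count times micrometers per pixel)
print(data.info["size_x"] * data.info["um_per_pixel"])  # ~664 µm
print(data.info["size_y"] * data.info["um_per_pixel"])  # ~382 µm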

Now we will create an optical flow object, which is the object used to run the motion tracking. Here we have chosen the Farneback optical flow algorithm.

opt_flow = mps_motion.OpticalFlow(data, flow_algorithm="farneback")

To list available optical flow algorithms you can use

mps_motion.list_optical_flow_algorithm()
['farneback', 'dualtvl1', 'lucas_kanade', 'block_matching']
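
If you want to try a different algorithm, you can for example create an optical flow object with the Lucas-Kanade method instead (same constructor as above, just a different flow_algorithm value)

# Same data, but with the Lucas-Kanade algorithm instead of Farneback
opt_flow_lk = mps_motion.OpticalFlow(data, flow_algorithm="lucas_kanade")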

Before we can run the motion analysis we need to estimate a suitable reference frame. We can do this by first estimating the velocity (let us use a spacing of 5)

v = opt_flow.get_velocities(spacing=5)
print(v)
VectorFrameSequence((1022, 587, 262, 2), dx=0.65, scale=1.0)
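
Note that the velocity only has 262 time points: with a spacing of 5 the velocities are computed between frames that lie 5 frames apart, so we lose spacing samples at the end

# 267 frames with a spacing of 5 gives 267 - 5 = 262 velocity vectors
print(data.info["num_frames"] - 5)  # 262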

Let us compute the norm and use an algorithm for estimating the reference frame. This algorithm uses the zero-velocity baseline and finds a frame where the velocity is zero. We must also provide time stamps of the same length as the velocity trace

v_norm = v.norm().mean().compute()
reference_frame_index = (
    mps_motion.motion_tracking.estimate_referece_image_from_velocity(
        t=data.time_stamps[:-5],
        v=v_norm,
    )
)
reference_frame = data.time_stamps[reference_frame_index]
print(f"Found reference frame at index {reference_frame_index} and time {reference_frame:.2f}")
Found reference frame at index 130 and time 1300.99

Let us also plot the velocity trace and mark the point where the reference frame is chosen

fig, ax = plt.subplots()
ax.plot(data.time_stamps[:-5], v_norm)
ax.plot([reference_frame], [v_norm[reference_frame_index]], "ro")
plt.show()
[Figure: velocity trace with the chosen reference frame marked in red]

We can now run the optical flow algorithm to extract the displacements. We will first downsample the data by a scale of 0.6 (i.e. 60% of the original size) to make the problem less computationally expensive

u = opt_flow.get_displacements(reference_frame=reference_frame, scale=0.6)
print(u)
VectorFrameSequence((613, 352, 267, 2), dx=1.0833333333333335, scale=0.6)
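
Before looking closer at this object, note that the new shape and pixel size follow directly from the original resolution and the scale factor (pure arithmetic on the numbers printed above)

# 1022 x 587 pixels scaled by 0.6, and the pixel size grows accordingly
print(int(1022 * 0.6), int(587 * 0.6))  # 613 352
print(0.65 / 0.6)  # 1.0833333333333335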

We see that the object we get back is a VectorFrameSequence. This is a special object that represents a vector field for each image in the sequence of images, and we see that it has dimension (number of pixels in x \(\times\) number of pixels in y \(\times\) number of time steps \(\times\) 2), where the final dimension holds the \(x\)- and \(y\)-components of the vectors. If we take the norm of this VectorFrameSequence we get a FrameSequence

u_norm = u.norm()
print(u_norm)
FrameSequence((613, 352, 267), dx=1.0833333333333335, scale=0.6)

Note that the norm here represents the norm at each pixel in the stack of images. To get a proper trace we can, for example, compute the mean across all pixels. Note that the arrays we work with here are lazily evaluated (using dask arrays), so we also need to call the .compute method to get actual results.

# Compute mean of the norm of all pixels
u_norm_mean = u_norm.mean().compute()
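
The result is now a plain NumPy array with one value per frame, matching the time stamps (a quick check; here we assume the mean is taken over the two pixel dimensions)

print(u_norm_mean.shape)  # (267,)
print(len(data.time_stamps))  # 267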

Let us also plot the trace

fig, ax = plt.subplots()
ax.plot(data.time_stamps, u_norm_mean)
ax.set_title("Displacement norm")
plt.show()
[Figure: mean displacement norm over time]

We can also create a movie with the displacement vectors. Since we have scaled the displacement, we also need to pass in a scaled dataset for the background images

scaled_data = mps_motion.scaling.resize_data(data, scale=u.scale)
movie_path = Path("motion.mp4")
movie_path.unlink(missing_ok=True)
mps_motion.visu.quiver_video(scaled_data, u, movie_path, step=12, vector_scale=4)

Now we can load the movie

from IPython.display import Video
Video(movie_path)

From the displacement we can also compute several other mechanical features, such as the velocity and strain. This is most conveniently handled by first creating a Mechanics object

mech = mps_motion.Mechanics(u=u, t=data.time_stamps)

For example we can compute the velocity using a spacing of 5

spacing = 5
v = mech.velocity(spacing=spacing)
v_norm_mean = v.norm().mean()
fig, ax = plt.subplots()
ax.plot(data.time_stamps[:-spacing], v_norm_mean)
ax.set_title("Velocity")
plt.show()
[Figure: mean velocity norm over time]

Note that this is different from the velocity trace we computed in the beginning: here we are computing the velocity from the displacements (which are also down-scaled), while in the beginning we computed the velocity directly using the optical flow.
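
Since both traces were computed above, a minimal sketch can show them in the same figure (note that the absolute scales may differ, since the displacement-based trace is computed on down-scaled data)

# Compare the two velocity estimates computed above
fig, ax = plt.subplots()
ax.plot(data.time_stamps[:-5], v_norm, label="optical flow")
ax.plot(data.time_stamps[:-spacing], v_norm_mean, label="from displacement")
ax.legend()
plt.show()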

We can also compute the mean value of the \(x\)-component of the Green-Lagrange strain tensor
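
For reference, the Green-Lagrange strain tensor is defined from the displacement field via the deformation gradient,

\(\mathbf{E} = \frac{1}{2}\left(\mathbf{F}^T \mathbf{F} - \mathbf{I}\right), \qquad \mathbf{F} = \mathbf{I} + \nabla \mathbf{u},\)

so the \(E_{xx}\) component below measures the stretch along the \(x\)-direction.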

# Green-Lagrange strain
print(mech.E)

# Plot the X-component of the Green-Lagrange strain
fig, ax = plt.subplots()
ax.plot(data.time_stamps, mech.E.x.mean().compute())
ax.set_title("$E_{xx}$")
plt.show()
TensorFrameSequence((613, 352, 267, 2, 2), dx=1.0833333333333335, scale=0.6)
[Figure: mean \(E_{xx}\) trace over time]