Several straight walks in a row were recorded using the {{XSENS}} for motion data
and a digital video camera for a visual analysis of single steps. The goal was to produce several annotated
records of a walking human. The focus of this experiment was the step size on planar ground.
Materials and Methods
As already described, the {{XSENS}} was used to record the 3D motion data.
Furthermore, a digital video camera was used to make a video record of the corridor walk.
For the video/image analysis, folding rules were fixed to the floor as a measuring leash. Additionally, one
straw was fixed to each of the walking person's soles. Pointing towards the measuring leash between
the person's legs, the straws were intended to make the video analysis of the step size more accurate.
Several approaches to gathering video/image data were tried. First, the video camera was fixed on
a small flat cart and slid after the person wearing the {{XSENS}} while it recorded the person's
steps, as shown in the video of Walk03 below.
As an alternative, the person wearing the {{XSENS}} could hold the video camera, as shown
in the video of Walk05 further below.
In another alternative approach, images could be taken with a digital camera for every step, as shown
in the video of Walk07 further below.
The internal clocks of the electronic recording systems must be synchronised!
The video data as shown in the video of Walk03 was nearly impossible to read.
Likewise, the video data from Walk05 was still very unsatisfying. The analysis
of the images as presented in Walk06 and Walk07 was better,
but the gait of the person wearing the {{XSENS}} was quite unnatural.
Declarations
Step Number
starting point, both feet side by side.
Side
left or right foot earlier
Time
time of reading
Time [s]
relative time of reading
Position Straw (Front)
Position of straw (front of straw) on the measuring leash
Bar
Bar the straw points at (Bar 1 = first bar after the starting point);
2r means the step was measured in the reverse direction on the second bar.
Position relative
The measuring leashes were fixed next to one another with the scale in reverse order:
Bar 1 - 2m…0m, Bar 2 - 2m…0m. The corrected position corresponds to a 4m
measuring leash starting with 0 at the rear foot in the starting position.
Due to poor light conditions the video and image data was hard to read. Additionally, the time used by
the {{XSENS}} showed some differences from the time of the running OS that was used to synchronise with
the video cameras. Therefore the annotation of the mvnx data files was not quite as simple and accurate as
we would have expected.
As already described on the page about the definition of step lengths,
the computation of the step lengths depends on its definition. In this approach, a combination of the velocities
of both feet (one moving, one standing) and the distance between them was used to extract the timeframes of the steps.
As shown in the 3D plots this works quite well.
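The extraction described above can be sketched in Python. Note that the function names, velocity thresholds, and array layout below are illustrative assumptions, not the actual implementation used on the mvnx data:

```python
import numpy as np

def extract_step_frames(left_pos, right_pos, fps=120,
                        vel_thresh=0.1, still_thresh=0.05):
    """Sketch: find the frames in which one foot swings while the
    other stands, using foot velocities and the distance between feet.
    left_pos/right_pos: (n, 3) arrays of foot positions in metres."""
    dt = 1.0 / fps
    v_left = np.linalg.norm(np.diff(left_pos, axis=0), axis=1) / dt
    v_right = np.linalg.norm(np.diff(right_pos, axis=0), axis=1) / dt
    # a step of the left foot: left foot moving, right foot (nearly) standing
    left_swing = (v_left > vel_thresh) & (v_right < still_thresh)
    right_swing = (v_right > vel_thresh) & (v_left < still_thresh)
    return left_swing, right_swing

def step_length(left_pos, right_pos, frame):
    """Horizontal distance between the feet at a given frame."""
    d = left_pos[frame, :2] - right_pos[frame, :2]
    return np.linalg.norm(d)
```

The step length would then be read at the end of each swing phase, when both feet are on the ground again.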
However, the computed step lengths are on average about 7cm smaller than the measured ones. This could have several reasons, as listed below.
Possible sources of error
Time of the electronic recording systems is not fully synchronised. (cm)
Different frame rates (50 fps for the digital video camera, 120 fps for the {{XSENS}}). (cm)
Folding rules are not aligned perfectly straight. (cm)
Distance is not read precisely in the video/image analysis. (cm)
Distortion from the video camera lens. (cm)
Electrical disturbances from WiFi, mobile phones, … affecting the 3D motion data.
Gait is unnatural because of the experiment design.
Gait is unnatural because of wearing the tight {{XSENS}}.
Gait is unnatural because of stopping to take photos.
Position shift during recording with the {{XSENS}} as described
in the MVN User Manual on page 41 (MV0391P, Revision B, from May 26th 2010). (cm)
Step size is only measured in walking direction, ignoring deviations to the left and right (see figure below).
Wrong interpretation of Contact Points in {{XSENS}}
as described in the MVN User Manual on page 51 (MV0391P, Revision B, from May 26th 2010). (cm)
Errors while calibrating {{XSENS}}.
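One of the sources listed above, measuring the step size only in walking direction, can be illustrated with a short sketch (the numbers are made up for illustration and are not measured values):

```python
import math

def along_track_length(dx, dy):
    """Step length as read off the measuring leash: only the
    component in walking direction (dx) is counted."""
    return abs(dx)

def full_length(dx, dy):
    """Actual horizontal displacement, including the lateral
    deviation dy to the left or right."""
    return math.hypot(dx, dy)

# Example: a 0.70 m step with a 0.05 m lateral deviation.
dx, dy = 0.70, 0.05
# The leash reading underestimates the true displacement only slightly.
error = full_length(dx, dy) - along_track_length(dx, dy)
```

For realistic lateral deviations the error stays in the low millimetre range, so this source alone cannot explain a 7cm offset.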
Error Resolution
As shown in the scatterplots comparing the measured and computed step lengths,
the error is roughly proportional to the step length. This suggests that there was
an undetected failure while calibrating the system. As described in the MVN User Manual on page 97 (MV0391P, Revision B, from May 26th 2010),
the position data is calculated from the acceleration and a human body model created during calibration.
To support this point, the length of the whole walk was analysed based on timestamps. In other words, the distance
between the starting point and the end point of the walks was computed based on timestamps extracted from the video and image analyses.
As shown in the table below, the distances in the virtual world are shorter than in reality.
{{table::tables/experiment1/distance.csv}}
This table shows the computed virtual world distance from the starting point to the endpoint of each walk.
An adjustment value is given in real world distance to align to the 4m measuring leash. The last column
gives the difference to the real world data: 4m - adjustment - virtualWorldDistance.
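The last column can be reproduced with a one-line computation; the function and argument names are illustrative, and the values would come from distance.csv:

```python
def real_vs_virtual_difference(virtual_m, adjustment_m, leash_m=4.0):
    """Difference between the real 4m measuring leash and the computed
    virtual world distance, after subtracting the adjustment.
    All values in metres; names are illustrative, not from the dataset."""
    return leash_m - adjustment_m - virtual_m
```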
Accordingly, the computed step lengths have been scaled up: the median of all ratios of the computed and the measured step
lengths has been used, as shown in the table below. This was calculated separately for each side.
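The per-side median correction can be sketched as follows; the function and variable names and the use of plain Python lists are illustrative assumptions:

```python
import statistics

def adjust_step_lengths(computed, measured, sides):
    """Rescale computed step lengths by the per-side median of the
    ratio computed/measured (a sketch of the correction described
    above; names are illustrative)."""
    ratio = {}
    for side in set(sides):
        rs = [c / m for c, m, s in zip(computed, measured, sides) if s == side]
        ratio[side] = statistics.median(rs)
    # dividing by a ratio < 1 extends the too-short computed lengths
    return [c / ratio[s] for c, s in zip(computed, sides)]
```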
This table shows the adjusted computed step lengths together with the data presented in the results part.
The step lengths have been divided by the foot-specific median of the ratio Step Length Comp[cm] / Step Length Exp[cm].
As shown in the last column, the difference between the computed and measured step lengths is (with exceptions) in the
expected magnitude (smaller than 3cm).
The difference between the computed and measured step lengths is now, as expected, small.
Only one value is bigger than 3cm, but this is probably due to a wrong extrapolation
in the video analysis from the preceding step, which was removed from this table (the camera missed
the relevant step when it hit the ground, so the position was extrapolated).
Conclusion
Based on the experience of this first experiment, a second one would be recommended,
but with:
Better light conditions.
Confirmation of a correct human body model after calibration.
Additional digital time measurement to generate time markers for observed events.