There was an App…

It works, but there are a few caveats:

1. It is a prototype, and may not work well on all platforms. It is designed for the Nexus 4 phone and for Chrome, Firefox and Safari; on other platforms and browsers, it may misbehave.
2. It is not secure. Don’t use the same password you use for your bank account.
3. It is resource-intensive. If you upload a 200km ride, it will take a long time to process, and rides process one at a time.
4. It only works in Victoria BC.
5. The kinetic energy and potential energy terms have been removed — they don’t work with the noise in the GPS track.

Enjoy!

## Moment of Inertia

The moment of inertia (I, in the power model) of a body rotating on an axis describes the amount of torque required to accelerate its rotation. It is calculated by taking the sum of the mass times the square of the radius of every particle in the body, or: $I_{p} = \sum\limits_{i=1}^n m_{i}r^2_{i}$.

To save a bit of effort, we can assume that a bicycle wheel is more like a pendulum, in that almost all of the wheel’s rotating mass (minus the spokes and rotating hub shell) is at its circumference, in the form of the rim and tire. So the simpler formula, $I=mr^2$, will suffice for now.

Via an informal survey of the road wheel offerings on chainreactioncycles.com, I can make a reasonable estimate of the masses of the components of the wheel:

1. tire: 200-300g;
2. tube: ~100g;
3. rim (taken as the remaining mass of the wheel, assuming the portion contained in the spokes and hub shell is insignificant; not a great assumption, but it’ll do for now): 800g;
4. air in the tire: 8.616906g

The value for 4 is calculated by estimating the volume of the tube as a torus of 330mm major radius (measured from the axle to the centre of the tube), and 11.5mm minor radius, $V = {(\pi r^2) (2 \pi R)} = {(\pi 0.0115^2) (2 \pi 0.330)} ={0.000861468420149m^3}$,

and calculating the density of air in the tube (120psi, 15C and 80% humidity). $\rho = {{P_{d}} \over {R_{d} T}} + {{P_{v}}\over{R_{v} T}} =11.2161842874 \frac{kg}{m^3}$,

Here, $P_{d}$ is the partial pressure of dry air, $R_{d}$ is the specific gas constant for dry air, $R_{v}$ is the specific gas constant for water vapour, $P_{v}$ is the pressure of water vapour and $T$ is the temperature in Kelvin. The tire pressure is measured with respect to atmospheric pressure (a gauge pressure), unlike atmospheric pressure itself, which is absolute. So the absolute pressure in the tire is, $p = {827370.875+100900.0} = {928270.875} Pa$,

where the first term is 120psi in Pa, and the second is the current atmospheric pressure in Pa.

Next, determine the mass of air: $m = {11.2161842874*0.000861468420149}={0.009662389} kg$.

We have to remember to subtract the mass of air that would have occupied the same space at atmospheric pressure, so: $m = {0.009662389-1.21360620562*0.000861468420149} = {0.008616906kg}$.
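The chain of calculations above can be strung together in a short script. The constants come from the text; the Tetens saturation-pressure formula is my assumption, since the post doesn’t name the formula used for the vapour pressure:

```python
import math

R_D = 287.058    # specific gas constant, dry air (J/(kg K))
R_V = 461.495    # specific gas constant, water vapour (J/(kg K))

# Tube volume: a torus with 330mm major and 11.5mm minor radius.
V = (math.pi * 0.0115 ** 2) * (2 * math.pi * 0.330)

T = 288.15                   # 15 C, in Kelvin
P_ATM = 100900.0             # current atmospheric pressure (Pa)
P_ABS = 827370.875 + P_ATM   # 120 psi gauge pressure, made absolute

# Saturation vapour pressure at 15 C via the Tetens approximation
# (my choice; the post doesn't name a formula), at 80% humidity.
p_sat = 610.78 * math.exp(17.27 * 15.0 / (15.0 + 237.3))
p_v = 0.80 * p_sat

# Density of the humid air in the tube.
rho = (P_ABS - p_v) / (R_D * T) + p_v / (R_V * T)

# Net mass: air in the tube, minus the air that would occupy the
# same volume at atmospheric pressure.
RHO_ATM = 1.21360620562
m_net = rho * V - RHO_ATM * V
```

The script reproduces the volume, density and net-mass figures above to within the precision of the vapour-pressure estimate.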

I weighed a wheel with and without 120psi of air in the tire. The difference is 8g on this very imprecise scale, close to the calculated value. This tiny value is well within the uncertainty of the masses of other wheel components, but I’ll throw it in for now.

So, using 250g for the tire, 100g for the tube, 800g for the wheel and 9g for the air in the tire, the mass of the pendulum is 1159g. The standard radius of the rim is 311mm. Then, the moment of inertia is, $I={1.159*0.311^2}={0.112099639} kgm^2$.

This is slightly smaller than the value used by Martin et al., $0.14kgm^2$, but maybe my wheels are just better! (More likely, they used a better method for measuring it.)

Now I can estimate the moments of inertia for different wheel/tire sizes.
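A sketch of that estimate as a function, using the hoop approximation and the component masses surveyed above (the function name is mine):

```python
def wheel_moment_of_inertia(rim_kg, tire_kg, tube_kg, air_kg, radius_m):
    """Hoop approximation: treat all rotating mass as if it sits at
    the rim radius, so I = m * r^2. Component masses are estimates;
    real wheels vary."""
    return (rim_kg + tire_kg + tube_kg + air_kg) * radius_m ** 2
```

With the values above, `wheel_moment_of_inertia(0.8, 0.25, 0.1, 0.009, 0.311)` reproduces the $0.112 kgm^2$ figure; swapping in other rim radii and masses gives estimates for other wheel/tire sizes.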

## Spoke Drag and Rolling Resistance

Unfortunately there’s no easy way to estimate spoke drag, so I’ll have to use the documented constant (0.0044).

Martin et al. estimated the coefficient of rolling resistance by averaging reported values for ten tires. I’ll be using the same value (0.0032).

## The Power Function

It’s time to get to the heart of this application.

Martin, Milliken and Cobb (1998) documented an amazingly comprehensive cycling power model, using a wind tunnel and SRM power meter for validation. Their model considers tangent winds, wheel bearing and wind drag, power loss through changes in kinetic energy, gradient, etc. The model makes some assumptions that may not be realistic for a general audience. For example, all of the subjects were young men, experienced cyclists, riding racing bikes. The functions for estimating the riders’ frontal area are derived from a relatively small, uniform sample of the cycling population, the wheel bearing drag is derived from very high-quality equipment in good trim, and the spoke drag is estimated from time trial wheels, which are much more slippery than standard road wheels.

However, the model is a great starting point, and matches the power readings of the SRM convincingly. If functions or constants for some of these coefficients are available, I can add those refinements later. For example, it would be nice to have a frontal-area model that considers gender, and is not based only on whippet-thin young men.

The power function follows: $P_{tot} = [V_{a}^2 V_{g} \tfrac{1}{2} \rho (C_{d} A + F_{w}) + V_{g} C_{rr} m g \cos(\arctan(G_{r})) + V_{g} (91 + 8.7 V_{g}) \times 10^{-3} + V_{g} m g \sin(\arctan(G_{r})) + \tfrac{1}{2} (m + I/r^2) (v_{f}^2 - v_{i}^2) / (t_{f} - t_{i})] / E_{c}$

Breaking it down a bit:

• $V_{g}$ is the ground velocity, what one would measure with a mechanical speedometer.
• $V_{a}$ is the air velocity of the bicycle in motion, found by adding the raw ground velocity to the tangential air velocity, $V_{wtan}$ which itself is found by $V_{w} [\cos(D_{w} - D_{b})]$, where $V_{w}$ is the wind velocity, $D_{w}$ is the wind direction, and $D_{b}$ is the bicycle’s direction of travel.
• $\rho$ is the air density, found by computing, ${P_{d} \over R_{d} T} + {P_{v} \over R_{v} T}$, where $P_{d}$ is the partial pressure of dry air, $P_{v}$ is the pressure of water vapour, $T$ is the temperature (K), $R_{d}$ is the specific gas constant for dry air (287.058 J/(kg·K)) and $R_{v}$ is the specific gas constant for water vapor, (461.495 J/(kg·K)).
• $C_{d}$ is the coefficient of drag for a cyclist.
• $A$ is the projected frontal area of the cyclist.
• $F_{w}$ is the incremental area of the spokes of the wheel.
• $C_{rr}$ is the coefficient of rolling resistance.
• $m$ is the mass of the rider and bike together.
• $g$ is the acceleration of gravity (approximately $9.81 m/s^2$).
• $G_{r}$ is the road gradient expressed as a fraction (rise/run).
• $I$ is the moment of inertia of the wheels ( $0.14 kg*m^2$, used in this study).
• $v_{f}$ and $v_{i}$ are the final and initial velocities of the segment being computed (m/s).
• $t_{f}$ and $t_{i}$ are the final and initial times of the segment being computed (s).
• $E_{c}$ is the chain efficiency factor, which accounts for mechanical friction in the drive train of the bicycle.

The beautiful thing about this function, other than its accuracy — at least under the conditions of Martin, et al. — is that it can be built up incrementally using the data collected by the application, known constants, and a few estimates and functions found by the authors.
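The whole model fits in a few lines of Python. This is a sketch; the function wrapper and argument names are mine, and the symbols follow the bullet list above:

```python
from math import cos, sin, atan, pi

def power(vg, vf, vi, dt, rho, cda, fw, crr, mass, inertia, r,
          gr, vw, dw, dc, ec=0.976, g=9.80665):
    """Per-segment power from the model above. vg/vf/vi in m/s, dt in
    seconds, dw/dc in degrees; ec and g default to the documented
    chain efficiency and standard gravity."""
    rad = pi / 180.0
    va = vg + vw * cos((dw - dc) * rad)            # air velocity (tangential wind)
    p_air = va ** 2 * vg * 0.5 * rho * (cda + fw)  # aerodynamic drag
    p_roll = vg * crr * mass * g * cos(atan(gr))   # rolling resistance
    p_bearing = vg * (91 + 8.7 * vg) * 1e-3        # wheel bearing drag
    p_pot = vg * mass * g * sin(atan(gr))          # potential energy change
    p_kin = 0.5 * (mass + inertia / r ** 2) * (vf ** 2 - vi ** 2) / dt
    return (p_air + p_roll + p_bearing + p_pot + p_kin) / ec
```

With the worked-example inputs from Martin et al. (90 kg combined mass, 471.8 m covered in 56.42 s, and the constants listed in this post), this returns roughly 213 W, matching the paper’s worked example.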

## The Algorithm

The current high-level algorithm (provisional, until it reaches production!) is as follows:

1. Load the set of schools whose Voronoi cells intersect with the ride.
2. Load the ride points, using a series of joins on the school, weather and DEM tables to retrieve each point’s time and location, along with the weather conditions for the point, the elevation and the slope.
3. Iterate over the points, applying the power function to each one.
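The steps above can be sketched as a pipeline, with the queries abstracted behind callables. All names here are illustrative, not the application’s:

```python
def process_ride(ride_id, load_schools, load_points, power):
    """Sketch of the high-level algorithm. The three callables stand
    in for the two database queries and the power function."""
    schools = load_schools(ride_id)            # Voronoi cells on the route
    points = load_points(ride_id, schools)     # joined weather/DEM rows
    return [power(pt) for pt in points]        # one power value per point
```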

## The Parameters

### The Rider

The rider’s characteristics are entered when the user registers. The most important is the rider’s mass, but if there’s a way to include age and gender, those would be loaded too.

### The Bike

The bike characteristics are mass and type, though the model as documented only supports road bikes.

### The Ride

The model uses parameters that are deduced from the rider’s position — on the “tops,” on the “hoods” or in the “drops” — which has a large effect on their frontal area. The photo at right shows Victoria professional cyclist Ryder Hesjedal climbing in the hoods position. The photo illustrates a curious problem with modelling the air drag of riders. He is very tall for a professional cyclist (6’2″, like me, but about 15kg lighter!), but rides a tiny frame with a very large drop between the saddle and handle bars. A typical rider is much more upright in this position.

A rider never maintains one position over an entire ride, but it would be onerous to manually tell the application what position was being used at each segment (as if anyone could remember that anyway). Should the rider enter the position they use most (usually on the hoods), or should they attempt to estimate what percentage of the ride they spend in each position? Should the rider even be given the option to choose? Once I decide this, this information will be stored as an attribute of the ride. The most likely default is the hoods position.

### The School and Weather

The power function depends on wind speed and direction, but it also depends on the air density, which requires the air pressure, temperature and humidity. Counter-intuitively, perhaps, humid air is less dense than dry air, because the molar mass of water is lower than those of molecular oxygen and nitrogen (a water molecule is a single oxygen with two hydrogens, and the two hydrogens are much lighter than the second oxygen in an O2 molecule).

To accurately calculate the air temperature and pressure at the elevation of a ride point, the elevation of the school must be used to adjust its weather metrics to sea level, and then to the elevation of the cyclist.
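One way to do this correction is the standard-atmosphere barometric formula; the post doesn’t specify a method, so the function below is an assumption, not the application’s actual code:

```python
G = 9.80665      # gravity (m/s^2)
M = 0.0289644    # molar mass of dry air (kg/mol)
R = 8.31447      # universal gas constant (J/(mol K))
L = 0.0065       # standard temperature lapse rate (K/m)

def adjust_to_elevation(p_station, t_station, h_station, h_rider):
    """Correct a station's pressure (Pa) and temperature (K) to the
    rider's elevation via sea level, using the standard-atmosphere
    barometric formula (an assumption; the post doesn't name one)."""
    exponent = G * M / (R * L)
    t0 = t_station + L * h_station                      # sea-level temperature
    p0 = p_station / (1 - L * h_station / t0) ** exponent  # sea-level pressure
    t_rider = t0 - L * h_rider
    p_rider = p0 * (1 - L * h_rider / t0) ** exponent
    return p_rider, t_rider
```

For a school at 20 m and a rider at 80 m, this lowers the pressure by roughly 12 Pa per metre of elevation gain, which is the right order of magnitude near sea level.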

### The Points

To parameterize the function, the ride points must be traversed in chains of three. This is because the time and velocity must be known for each pair of points that comprises a segment upon which the power will be calculated, and the initial velocity can only be known from the point previous to the first of the pair. The relevant data associated with each point are the time and the position. All other data are derived. These are:

• The elevation — from the DEM.
• The velocity  — from time and distance between the pair.
• The direction  — the azimuth of the pair.
• The gradient — the rise (elevation) and run (distance) over the pair.
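The derivations in this list can be sketched for a single triple of points. The names and the tuple layout are illustrative, and the points are assumed to already be in a metric projection:

```python
import math

def segment_params(p1, p2, p3):
    """Derive segment parameters from three consecutive track points.
    Each point is an (x, y, elevation, t_seconds) tuple."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    d = dist(p2, p3)
    vf = d / (p3[3] - p2[3])             # final velocity (m/s), over the pair
    vi = dist(p1, p2) / (p2[3] - p1[3])  # initial velocity, from the previous pair
    azimuth = math.atan2(p3[0] - p2[0], p3[1] - p2[1])  # radians from north, like ST_Azimuth
    gradient = (p3[2] - p2[2]) / d       # rise over run
    return vf, vi, azimuth, gradient
```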

## The Implementation

The power function is implemented as a Python script in a plpython function. PL/Python is a PostgreSQL procedural language that allows stored procedures to be written in Python in the database, where they can be called as standard functions via, in this case, JDBC.

The queries that extract the ride, school and user information are standard SELECT queries, but the ride points query is a bit more involved:

WITH t AS (
SELECT row_number() over(PARTITION BY ride_id ORDER BY id) AS seq,
id, geom_snap, ST_Value(rast, geom_snap) AS elevation, point_time
FROM points, dem
WHERE ST_Contains(ST_ConvexHull(rast), geom_snap)
), u AS (
SELECT s.school_id, s.poly, w.wind_speed, w.wind_dir, w.humidity,
w.temperature, w.pressure,
date_trunc('minute', w.obs_date) AS obs_date
FROM weather w INNER JOIN schools s
ON w.school_id=s.id
)
SELECT
(p3.elevation-p2.elevation) / ST_Distance(p3.geom_snap, p2.geom_snap) AS slope,
(ST_Distance(p3.geom, p2.geom) * 1000.0) /
extract('second' from p3.point_time - p2.point_time) AS v,
(st_distance(p2.geom, p1.geom) * 1000.0) /
extract('second' from p2.point_time - p1.point_time) AS vi,
ST_Azimuth(p2.geom, p3.geom) AS azimuth,
p3.elevation, p3.point_time AS t, p2.point_time AS ti,
u.school_id, u.wind_speed * 0.277778 AS wind_speed,
u.wind_dir, u.humidity,
u.temperature, u.pressure * 100.0 AS pressure
FROM t p1, t p2, t p3, u
WHERE
p3.seq=p2.seq+1
AND p2.seq=p1.seq+1
AND ST_Contains(u.poly, p3.geom)
AND date_trunc('minute', p3.point_time)=u.obs_date
AND p3.ride_id=$1

The subquery t creates a list of points from a ride with a temporary consecutive sequence number. The sequence number is required so that triples of points can be assembled. As explained in a previous post, it is possible for the points of two rides to be interleaved, so the actual IDs cannot be relied upon to be consecutive. This subquery also extracts the elevation of the point from the DEM. The subquery u selects the attributes of a school, and the weather data related to that school.

The main query uses the information selected from u and t to derive the other required attributes, such as slope, velocity (final and initial), time (final and initial) and azimuth, and to convert the units of some attributes to the appropriate form (e.g. km/h to m/s). This query actually ignores the first and second points in the ride. The part of the query that includes them is left out for clarity.

Finally, a Python implementation of the model above computes these values into a power estimate for each pair of points. The listing below duplicates the worked example provided in the paper.

from math import cos, sin, atan2, atan, pi

def rad(val):
    return val * (pi / 180.0)

# Acceleration of gravity (ms^-2)
g = 9.80665
# Moment of inertia of wheels
I = 0.14
# Chain drag factor
Ec = 0.976
# Mass of rider (kg)
mr = 80.0
# Cyclist drag area (m^2)
CdA = 0.2565
# Wheel radius (m)
r = 0.311
# Mass of bike (kg)
mb = 10.0
# Mass of rider and bike
m = mr + mb
# Riding direction (deg)
Dc = 340.0
# Final velocity at current node (m/s)
Vf = 8.45
# Initial velocity (previous node)
Vi = 8.28
# Time at current node (s)
Tf = 56.42
# Time at initial node
Ti = 0.0
# Road gradient (rise/run)
Gr = 0.003
# Wind direction (deg)
Dw = 310.0
# Wind velocity (m/s)
Vw = 2.94
# Coefficient of rolling resistance (from doc)
Crr = 0.0032
# Incremental spoke drag factor
Fw = 0.0044
# Air density
p = 1.2234

# Average velocity over the segment (m/s).
Vg = 471.8 / (Tf - Ti)

# Air velocity and yaw angle (rad)
Vwtan = Vw * cos(rad(Dw) - rad(Dc))
Vwnor = abs(Vw * sin(rad(Dw) - rad(Dc)))
Va = Vg + Vwtan
Yaw = atan2(Vwnor, Va)

# Power to overcome aerodynamic drag for bike and rider.
Pat = Va**2 * Vg * 0.5 * p * (CdA + Fw)

# Power to overcome rolling resistance
Prr = Vg * cos(atan(Gr)) * Crr * m * g

# Power to overcome bearing resistance
Pwb = Vg * (91 + 8.7 * Vg) * 10**-3

# Power to overcome changes in potential energy
Ppe = Vg * m * g * sin(atan(Gr))

# Power to overcome changes in kinetic energy
Pke = 0.5 * (m + I/r**2) * (Vf**2 - Vi**2) / (Tf - Ti)

# Net power
Pnet = Pat + Prr + Pwb + Ppe + Pke

# Total power considering drivetrain drag.
Ptot = Pnet / Ec

print "Ptot", Ptot

The output of this script is 213.357179977W, which closely matches the reported output of 213.3W. The difference is accounted for by the fact that I used a more precise definition of the acceleration of gravity, and did not round any intermediate values.

A few notes relating to this (simplified) script:

1. I calculate the air density (rho, or p in the Python script) based on the humidity, pressure and temperature, where temperature and pressure are corrected to the rider’s elevation by adjusting for the school’s and rider’s elevations. Here a constant is used, with no explanation.
2. The rider and bike masses will come from the database.
3. Vg and Vf are the same value, because it’s impossible to determine the instantaneous velocity of the rider using GPS tracks.
4. Some parameters, such as the chain drag and wheel wind resistance, are highly variable. I may be able to find better replacements for some, but not others.
5. The frontal area of the rider is calculated using the assumptions in the paper. As mentioned above, this value varies with rider mass, position, bike size and probably other factors.
6. A couple of these constants (e.g. Fw) are not well explained.

This post is part 2.
## The Server Side

### Overview

When a user uploads a ride to the Web application, there are several processing steps that must be performed before the power can be displayed.

1. The GPX representation is converted into a standard in-memory representation that can be stored to and retrieved from a database.
2. The weather data for the time span of the ride, and the school zones (Voronoi cells) that the ride passes through, must be downloaded and/or retrieved.
3. The ride points must be snapped to the road network.
4. The elevations of each node in the ride must be read from the DEM and slopes calculated.
5. The power is calculated.

Steps 1 and 2 (if the weather data has already been retrieved) can be performed synchronously — that is, while-you-wait — but the other steps are time-consuming, and the user should not be expected to sit and watch a spinner while the application takes an indeterminate period of time to work. So the rides have to be put into a queue, where a background process can perform these actions when it has the time. The user should be able to access processing status messages, and should get an email when the processing is finished.

Other Web services don’t need this processing step because they don’t factor in the weather, and if they did, they’d have it downloaded already. Plus, they have the computing resources to do things like snapping and elevation extraction nearly instantaneously.

At right is an activity diagram which gives an overview of the processing system. The Processor class implements Runnable and ServletContextListener. The listener is configured in the Servlet container’s web.xml to start on context initialization, and stop when the context is destroyed. This application has only one context, so the processor is effectively a singleton. The REST API mediates between the Client and Server tiers in this diagram, but is not explicitly labelled.
All REST calls have access to the ServletRequest, through which they can access the ServletContext, and hence the Processor, which provides a method, enqueueRide, through which the application can enqueue a ride to be processed. The processor’s run method idles until there’s a ride in the queue, at which point it removes the ride and performs the processing steps sequentially as in the diagram.

The processor also notifies the application when a ride has finished processing or when processing has failed. It does this by sending a message to the server, which triggers an email. The client can also retrieve a ride’s status at any time through the REST API, to learn whether it is queued, processing, completed or failed.

When a ride is first uploaded, it is saved to the database and is immediately available for display in its unprocessed form. This allows the user to see the ride displayed on the client. Behind the scenes, the ride is fed through the sausage factory…

### Processing

The first thing the processor does is to ascertain whether the weather data for the school zones intersected by the path of the ride, and the time span covered by the ride, are available in the database. These checks are performed by first retrieving a list of schools via a spatial query:

SELECT DISTINCT ON(s.school_id) s.school_id
FROM schools s INNER JOIN points p
ON ST_Contains(s.poly, p.geom)
WHERE p.ride_id=?
ORDER BY s.school_id

Here, s.poly is the Voronoi cell of the school, and p.geom is a node in the ride track. Then, another query checks whether the data exist for each school and date. If data cannot be found for the given school and date, they are downloaded from the school weather station network.
Once the wind data are retrieved, the wind speed and direction are set on every point using a spatial query:

UPDATE points p SET wind_dir=t.wind_dir, wind_speed=t.wind_speed
FROM (
SELECT s.poly, w.wind_speed, w.wind_dir,
date_trunc('minute', w.obs_date) AS obs_date
FROM weather w INNER JOIN schools s
ON w.school_id=s.id
) AS t
WHERE ST_Contains(t.poly, p.geom)
AND date_trunc('minute', p.time)=t.obs_date

This query should actually include temperature, pressure and humidity as well, because the power algorithm requires parameters derived from these values. That algorithm will be explained later. The date_trunc function is used to round point times to the nearest minute, as the weather data are recorded with minute resolution, while rides are recorded with second resolution.

It does occur that a school will stop reporting weather conditions. Perhaps it will be necessary to calculate local conditions by averaging neighbouring cells. I have not examined any solutions. It is also possible that a record is missing for the requested minute, but not neighbouring minutes. This query could be adjusted to locate the nearest record in time, but at what temporal distance does the nearest record become meaningless? Another question to ponder.

Next the ride points are snapped to the road network, as described in a previous post, then the elevation and slope are computed, as described here. What remains is a ride with nodes containing the wind speed and direction and slope. All that is left is to calculate the power.

### Status

All of the above is implemented, except for the power calculation and email notification. Of course, “implemented” and “durable under real-world loads” are two entirely different things…

It’s about time I started talking about actually building the application, so I’m going to start looking at it from the other end, for a while.
The goal is to build a desktop/mobile application that can load GPS data for bicycle rides, process it to determine the per-segment and overall power production of the rider, and display this information. There are two pillars in the application: the client and the services.

## The Client

The client application is being developed as a JavaScript application which communicates with REST services provided by the server. The client is simple, and is built using native JavaScript, along with some small libraries that I developed for my convenience — no industrial-grade libraries (jQuery, Bootstrap, etc.) will be required, with the exception of OpenLayers 3.0, which will provide the map interface.

This is a high-level state diagram illustrating the paths through the application. Some comments:

• Every interaction in the application requires the user to be signed in, so this state is checked at every transition. Because it is implicit, it is not included here. When a transition is attempted and the user is found to be signed out, they are redirected to the Login view.
• The user can sign out, or return to the menu from any screen. Returning to the menu while editing persistent information effectively cancels the edit.

### Views

• The Login view does what it says — if the user is not logged in, this screen allows them to do so, or provides a link to the Registration page. When the credentials are submitted and the user is authenticated, they are redirected to the Menu.
• The Registration view allows the user to choose a username and password, and to enter some preliminary data, such as gender, body weight and whatever else might be useful for the application’s purposes. When the user is registered, the application restarts. At this stage, complex email validation schemes are out of scope.
• The main Menu provides access to the different features of the application. The user can load ride data, edit their profile, add or edit bicycles or view existing rides.
• The Ride Source Selection screen enables the rider to choose the source of the data they’d like to process. Once the data has been processed, the user is redirected to the Ride View (below). The three source options are:
  • GPX File — a GPX file will be uploaded and processed by the application.
  • GPX URL — a GPX file will be downloaded from a URL and processed.
  • Strava Ride — a GPX file representing a ride will be downloaded from Strava and reprocessed.
• The Ride List provides a list of rides that have already been processed; selecting a ride directs the user to the Ride View.
• The Ride View displays speed, power and other metrics from the ride (layout yet to be determined), along with a map of the ride.
• The User Information view provides access to the user’s details, some of which will have been entered in the registration phase.
• The Bike List is a list of bikes that the rider may use. When a ride is processed, it is important to know some details about the bike, particularly the type (road or mountain bike) and weight. This view provides the ability to add and edit bikes.
• The Bike Information view provides access to information for a single bike.

### Status

The Bike Information, User Information, GPX URL, and Strava Ride features have yet to be implemented. Everything else is in a workable, if not exactly presentable, state — but you’ll have to take my word for that right now!

## The Server Side

### The API

Services are provided by a Servlet running on a Tomcat instance on a Linode. The data store is a PostgreSQL database with the PostGIS and pgRouting extensions. REST capability and JSON serialization/deserialization are provided by an existing library that I designed for a previous project. Java has a standard REST application development framework (JAX-RS), which does essentially the same job as my own framework. But time is of the essence, and learning a new framework, while worthwhile, would have taken too much time.
REST is an oft-touted, but poorly understood (at least by me, initially) paradigm for building Web-accessible APIs. Essentially, it consists of nouns (resources) and HTTP verbs, such as PUT, POST, DELETE and GET. Nouns are realized as URLs, in the form:

/things/{id}/property

In this instance, “property” is an attribute of the “thing” identified by “id,” which can be mutated by PUTting a value, or retrieved by GETting it.

Like many things in programming, it is easy to claim to have implemented a REST API, and yet to have a hybrid system where the resource dictionary is polluted with verbs. For example, one might implement the login API using the intuitive URL:

/users/login

and POST the user’s credentials to it. This is not RESTful, because “login” is a verb, not a noun. This URL is not a resource. A more RESTful way would be to POST the user’s credentials to the sessions list, which would have the effect of creating a new session and returning a unique token:

/sessions

The user could then retrieve (GET) the session resource using the token returned by the POST action:

/sessions/{token}

Since one of REST’s objectives is to be stateless, the use of session variables and cookies to preserve the user’s application state is deprecated — this information should be stored on the client and passed explicitly through resource requests.

But why do it this way? As with most constrained APIs, the ultimate goals are simplicity, maintainability and scalability. More generally, communication between tiers in an application (or a society) is easier when the lingua franca is standardized.

### The Processor

The next post will feature a discussion of the processing system.

I usually record rides with my Google Nexus 4 phone, which seems to do a very bad job of recording the elevation. With an accurate DEM, and points snapped to an accurate road network, it becomes possible to recalculate the elevation of each track node and then determine the slope of each segment.
Assuming the track and the DEM are loaded into a PostGIS database, this is easily done with a couple of SQL queries. The first query updates the elevation of the track with new values read from the DEM using the snapped points:

update points set geom_snap=st_geomfromtext(
'POINTZ(' || st_x(geom_snap) || ' '
|| st_y(geom_snap) || ' '
|| st_value(rast, geom_snap) || ')', 32610)
from dem
where ride_id=$1
and st_contains(st_convexhull(rast), geom_snap);


This query assembles a new 3D point from the horizontal coordinates of the original snapped point and a new z-value read from the raster. Using the spatial (GiST) index on the raster table, the query takes 2770ms over 2243 points on my development machine. Not super fast, but acceptable.

The next step is to calculate slope. The first point in the track has a slope of zero, and every subsequent point’s slope is calculated between itself, and the previous one. This can also be accomplished with an SQL query:

with t as (
select
row_number() over(partition by ride_id order by id) as seq,
id, geom_snap
from points
where ride_id=11
order by id
)
select (st_z(p1)-st_z(p2))/st_distance(p1, p2)
from (
select a.geom_snap as p1, b.geom_snap as p2
from t a, t b
where a.seq=b.seq+1
) as u;


This query uses the row_number window function to create a sequential ID for each point in the selection from the points table. All nodes from all GPX tracks are stored in the same table, so it is possible to have interleaved row IDs from different rides. If we wanted to extract sequential points from a ride, we could not depend on these IDs because they might not be consecutive.

Pairs of consecutive points are extracted by performing a Cartesian product on two instances of the subquery (t), filtered to pairs whose sequence numbers are consecutive.

So far so good, but there’s a problem: This query doesn’t protect against coincident points, which have a distance of zero, resulting in a division-by-zero error. That’s easy enough to correct, but why would it happen in the first place? Coincident points are pretty unlikely in the real world… The answer is that the snapping procedure may have snapped some points to the same location — in particular, this can happen when two points are snapped to the endpoint of a segment that is nearest to both. That’s the hypothesis, anyway.
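Whatever the cause, the slope computation needs a guard. A minimal sketch in Python, for illustration (the real computation happens in SQL):

```python
import math

def slope(p_prev, p_cur):
    """Slope between consecutive snapped points, given as (x, y, z)
    tuples. Returns None for a coincident pair, which the snapping
    step can produce, instead of dividing by zero."""
    d = math.hypot(p_cur[0] - p_prev[0], p_cur[1] - p_prev[1])
    if d == 0.0:
        return None
    return (p_cur[2] - p_prev[2]) / d
```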

with t as (
select
row_number() over(partition by ride_id order by id) as seq,
id, geom_snap
from points
where ride_id=11
order by id
)
select id1, id2
from (
select a.id as id1, b.id as id2, a.geom_snap as p1, b.geom_snap as p2
from t a, t b
where a.seq=b.seq+1
) as u
where st_distance(p1, p2)=0;


Here, I’ve modified the slope query to find IDs of coincident points, and there are many!

 id1  |  id2
-------+-------
11711 | 11710
11712 | 11711
11713 | 11712
11714 | 11713
11986 | 11985
12240 | 12239
12241 | 12240
12305 | 12304
12489 | 12488
12490 | 12489
12491 | 12490
(etc...)


It’s easy to see from this image that several of the original (blue) points have been snapped to the previous segment, and are grouped in the same place. Fortunately the fix is easy. In the snapping algorithm (implemented as a plpython function), the last segment is not included in the set of geometry IDs returned by the Dijkstra function, but it is known, because the second anchor point is associated with it. Adding this segment to the proximity search solves the problem.

But there are other problems:

• When the route rounds a corner, the GPS points tend to bunch up because they snap to the nearest edge, which is aligned so that its first vertex is closest to the points. I would need to space the points out somehow. I’ve elected to do this by spacing them out proportionally along the shortest route between two confident points.
• It turns out not to be possible to avoid snapping some points to ways that are parallel to the actual way that was used. This problem is not solvable — the machine has no way of knowing whether I was riding on the trail, or on the road that runs parallel to it. The only way to fix this is to manually edit the track, something I’d like to avoid. I’ve solved this problem, tentatively, by eliminating some classes of road segment from the confident point-finding algorithm. That is, only points that are unambiguously associated with a road or a bike path are allowed.
• Etc.

At some point, though, I just have to accept that the system cannot be perfected and move forward with the project. So here are two final interesting issues.

One of the artifacts of LiDAR classification is the removal of bridges. The abutments of the Johnson Street Bridge (left) are tall and steep, and the slope-finding algorithm thinks that I have ridden down to the surface of the water, a slope of -67%. Steep! Not much I can do about that.

The other issue is that the maintainers of Open Street Map are a little too ambitious. The bridge is currently under construction, and the approaches have changed. The new route and the old route are represented in the line work, but the temporary route isn’t. The algorithm follows the GPS track as far as it can, then backs up and takes a more plausible route — the old one — and snaps correctly to it. The under-construction route is labelled, “construction,” but for now it does not accurately represent the path of the road.

Now, each GPS position has an accurate elevation and slope — sort of — but as I mentioned earlier, there’s no real basis for positioning the GPS points where they are along the road, even if I know I was on the road. The reasoning behind this strategy is that the vertical error introduced by using points that are a few meters out of place along a road are much smaller than errors introduced by not having points on the road in the first place.

The last image on the right demonstrates the advantage. One of the original GPS points is in a ravine, nearly six meters below the surface of the road! The snapped points show a much more consistent, realistic gradient.

GPS accuracy can vary, and is usually pretty bad, particularly in “urban canyon” (multipath error) or forested environments. This requires the geographer to snap the locations to a road network (assuming it is known that a subject travelled along a road) in order to do any useful analysis. This, to put it lightly, is a challenge.

In one sense, snapping is a fool’s errand — Strava’s official policy is that they won’t do it, and for good reason: the relative accuracy of the time stamps of the locations in a GPS track is pretty high, though sometimes imprecise, but the positional accuracy is often terrible. It is impossible to know with certainty how fast a subject was going because the exact length of a segment is unknown. Even if one can snap the locations to the road network with 100% accuracy, the distance from one to the next is not solvable. The only way to overcome this would be to record the average speed of the subject during the segment using an external, mechanical sensor. Strava’s policy seems to be that you cannot improve accuracy by snapping, so why do it?

I can think of one reason. The slope of a segment is a critical parameter to the power function. The DEM used in this project is a highly accurate 2m, LiDAR-derived raster, with vertical accuracy in the range of 30cm. Roadways tend to have a consistent, or gradually-changing, slope lengthwise, but the surrounding terrain can have radical variations in elevation within a few meters of the pavement. The accuracy of the DEM means that both the gradual slope of the roadway and the radical topography of the surrounding environment are accurately represented. Snapping the track locations to the roadway guarantees that elevation values are taken from the pavement.

Assuming there is a good algorithm for snapping locations to the road network, and accepting that there is no good way of correcting the distances between snapped locations, is the slope from snapped positions still useful? If the power calculations are performed on un-snapped points, does it make sense to use the slopes from snapped points in the power function?

I’ll revisit that some other time. For now, here’s how I did the snapping.

The algorithm, arrived at through some study of the literature and a lot of trial and error, is as follows:

1. Locate the nearest road segments to each point, within a given distance, ordered by the distance from the point.
2. Calculate the difference between the azimuth of the segment ending with the point, and azimuth of the nearest road section.
3. Determine the confidence of the match between a point and a road segment. A match is confident if:
1. there is only one road segment within the search distance,
2. the point is within a given distance of the segment, and
3. the azimuthal difference is small.
4. Snap the confident points to the road network at the nearest point on the road segment.
5. Between each pair of confident matches, determine the shortest path along the road network using Dijkstra’s shortest path algorithm.
6. Match the un-snapped points to the segments returned by the shortest path search.
7. Snap the un-snapped points.

The algorithm was realized as a PostgreSQL function which depends on the plpython, PostGIS and pgRouting extensions. PostgreSQL functions are loaded into the database and executed within the database’s process space. plpython allows these functions to be written in Python. pgRouting is a graph manipulation extension that builds graphs from linework (roads) and provides methods for traversing them.

The road network was created from OpenStreetMap data clipped to the Victoria area. All lines that did not have a highway designation, or for which the highway designation was “footway,” were deleted. Then the linestrings were decomposed into individual segments, with each row in the table containing one segment. For each segment, the azimuth was computed and modulated to 180 degrees (North and South are irrelevant).
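The azimuth computation and its modulation to 180 degrees can be sketched as follows. This is a stand-alone illustration, not the database code: it computes a segment’s bearing from its endpoints and folds it into [0, 180), since the direction of travel along a road is irrelevant.

```python
# Illustrative azimuth calculation for a road segment in a projected CRS.
import math

def azimuth_deg(x0, y0, x1, y1):
    """Clockwise bearing from north, in degrees [0, 360)."""
    return math.degrees(math.atan2(x1 - x0, y1 - y0)) % 360.0

def azimuth_mod180(x0, y0, x1, y1):
    """Azimuth folded to [0, 180): a segment and its reverse are equal."""
    return azimuth_deg(x0, y0, x1, y1) % 180.0

# A segment running due south folds to the same azimuth as one running
# due north, which is exactly what the matching step needs.
print(azimuth_mod180(0, 0, 0, 1))   # -> 0.0 (north)
print(azimuth_mod180(0, 0, 0, -1))  # -> 0.0 (south, folded)
```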

A GPS track was downloaded from Strava for testing. The ride begins at the University of Victoria and ends at my home, a distance of about 20km. The track was parsed by a custom-made Java library and saved to the database as a point set, with each of the 2326 points having an x, y, z and time coordinate. GPS tracks with power data will also have a power attribute.

First, the points in the ride are selected in order, by SQL:

    -- Select the points of the ride
    select id, azimuth from points where ride_id=$1 order by id

A Python script loops over the points. For each, the candidate (nearest, within the search distance) road segments are found:

    -- Select the road segment match(es) for each point.
    select distinct on(l.osm_id) l.gid, l.osm_id,
        st_distance(l.geom, p.geom) as dist,
        abs(l.azimuth-p.azimuth) as az
    from osm_line l, points p
    where p.id=$1 and st_dwithin(l.geom, p.geom, 10.0)
    order by l.osm_id, st_distance(l.geom, p.geom)


The ID of the nearest segment, and the number of candidate segments are attached to each point object, and the averages of the distance and azimuthal differences are recorded.

The next step is to identify “anchor” points, those for which the confidence of a match with a road segment is high. I identified these, somewhat arbitrarily, by the criteria given in step 3, using 3x the average distance and 2x the average azimuthal difference. These values could certainly be refined, but they suffice to demonstrate the algorithm.
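The anchor test can be illustrated with a stand-alone sketch using the thresholds from the text (3x the average distance, 2x the average azimuthal difference). The field names and sample values are invented for the example:

```python
# Hypothetical sketch of the anchor-point test from step 3 of the algorithm.

def is_anchor(point, avg_dist, avg_az_diff):
    """A point is a confident match if it has exactly one candidate
    segment, is close to it, and is roughly aligned with it."""
    return (
        point["n_candidates"] == 1
        and point["dist"] <= 3.0 * avg_dist
        and point["az_diff"] <= 2.0 * avg_az_diff
    )

track = [
    {"n_candidates": 1, "dist": 2.0,  "az_diff": 5.0},   # confident
    {"n_candidates": 2, "dist": 1.0,  "az_diff": 3.0},   # ambiguous match
    {"n_candidates": 1, "dist": 3.0,  "az_diff": 2.0},   # confident
    {"n_candidates": 1, "dist": 2.0,  "az_diff": 30.0},  # badly aligned
    {"n_candidates": 1, "dist": 40.0, "az_diff": 4.0},   # too far away
]
avg_dist = sum(p["dist"] for p in track) / len(track)     # 9.6
avg_az = sum(p["az_diff"] for p in track) / len(track)    # 8.8
print([is_anchor(p, avg_dist, avg_az) for p in track])
# -> [True, False, True, False, False]
```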

The final step in the algorithm is to iterate over each pair of anchor points that is separated by one or more intermediate points. pgRouting is used to find the shortest path between the anchor points, and the intermediate points are snapped to this path using a similar method to step 1.


    -- Find the shortest path between two points.
    select id2 as gid
    from pgr_dijkstra('select gid as id, source, target, cost_len as cost from osm_line',
        (select source from osm_line where gid=$1),
        (select target from osm_line where gid=$2),
        false, false)

    -- Snap the intermediate points to the shortest path.
    update points
    set geom_snap=st_geomfromtext('POINT(' || st_x(t.pt) || ' ' || st_y(t.pt) || ' 0.0)', 32610)
    from (
        select id, gid, st_closestpoint(lgeom, pgeom) as pt
        from (
            select p.id, l.gid, l.geom as lgeom, p.geom as pgeom
            from osm_line l, points p
            where p.id=$1 and l.gid in({})
            order by st_distance(p.geom, l.geom)
            limit 1
        ) as u
    ) as t
    where points.id=t.id


The results of this procedure are pretty convincing, and some tough cases are well-handled. The image below shows a scenario where two roads merge, and the points in the track snap to the wrong road.

These points are eliminated as confident points, and instead snapped to the shortest path, as below.

The algorithm also performs well around tight corners, in complex routes.

Of course, this algorithm is not correct in the mathematical sense, but it should do a good job in assisting the extraction of elevations from the DEM for the calculation of the slopes (to be covered in another post).

I said I would examine universal Kriging in this post, but there has been a change of plans. After some consultation with my geostatistics guru on the matter of variograms and trend surfaces, we’ve come to the conclusion that a simpler approach might be appropriate. However, I’ll explain the last part of the exploration of Kriging first.

After preparing a variogram cloud, I determined, with the help of O’Sullivan and Unwin, that it would be necessary to compute a trend surface and examine the residuals before choosing a model. I computed the trend surface using Java and Apache’s Commons Math matrix libraries. The program produced the coefficients for the trend plane and a CSV file containing the geolocated observations. These I plotted using gnuplot.

This is the trend surface that tracks the change in mean (drift) across the field.

This is the semivariogram of the residuals. As you can see, not much has changed. At this point we decided that I could continue down this path or try a simpler strategy, such as Voronoi polygons.

In short, a Voronoi diagram is a partitioning of space (in this case, the plane) wherein any point in a given cell is closest to the observation enclosed by that cell. To the left is an image for the Voronoi diagram of schools in Victoria. (Note that where more than one school appears in a cell, it’s just an illusion created by similar colours.)
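The operative property of the Voronoi partition is that the estimate at any location is simply the value observed at the nearest station. A minimal sketch, with hypothetical station coordinates and wind speeds (in a projected CRS, so plain Euclidean distance applies):

```python
# Nearest-station lookup: equivalent to finding the Voronoi cell that
# contains a query point. Station names, positions and readings are
# invented for illustration.
import math

stations = {
    # name: ((x, y), wind_speed_kmh), illustrative values only
    "uvic":      ((4720.0, 5380.0), 18.0),
    "oak_bay":   ((4400.0, 5100.0), 12.0),
    "james_bay": ((3900.0, 5050.0), 22.0),
}

def wind_estimate(x, y):
    """Value of the Voronoi cell containing (x, y): nearest station's reading."""
    def dist(item):
        (sx, sy), _ = item[1]
        return math.hypot(sx - x, sy - y)
    nearest = min(stations.items(), key=dist)
    return nearest[1][1]

print(wind_estimate(4700.0, 5300.0))  # nearest to "uvic" -> 18.0
```

In the real system this lookup is offloaded to the spatial database: a point-in-polygon query against the stored Voronoi geometries.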

Over most of the study area, the school network is fairly dense, which obviates the need for an interpolation of the wind values. Besides, Kriging was never expected to be accurate because of the terrain considerations. Using a Voronoi partitioning has several advantages over Kriging:

• Initially, I had expected to produce a raster for every minute during a ride (as that is the temporal resolution of the weather data.) Of course, the goal of Kriging is to produce a function which can estimate a value based on observations, so it would not, in fact, have been necessary to produce an entire raster. Still, using Voronoi polygons will offload the work of finding the estimate onto the spatial database — for a given node in the GPS track, the value associated with the containing polygon is the estimate.
• The estimate for any node will have the same value as the observed value. Naturally, given the spatial displacement between the estimate and the observation, this implies some error, but the interpolated estimate would have contained error, too. At least the observed value is true, somewhere.
• Fitting a model to the semivariogram is a bit of a black art. This strategy bypasses it entirely.
• It’s time to move on from this phase of the project and build a Web application. The Voronoi solution is quick and easy to implement.

I installed a PostGIS Voronoi function from someone named “GeoGeek” (PostGIS does not yet have a native Voronoi function) and ran it against the list of schools, producing the geometries pictured above. So far, so good.

For the sake of fun, here’s a snapshot of wind changes in Victoria over a 15-minute period in July, 2014 (missing or intermittent arrows are stations not reporting.)

To learn more about the spatial dependence of wind measurements in the Victoria area, I had to build a semivariogram. This requires a bit of wind data to work with, so I began building the part of the application that downloads weather records.

The data downloader consumes CSV files generated by VictoriaWeather.ca and stores them in a PostGIS database, associated with each school. The CSV files aren’t actually generated until the user-facing HTML page is viewed (this is natural, given that a user usually accesses the CSV file through that page), so the downloader attempts to load the CSV file first. If that fails with a 404 status (file not found), an attempt is made to retrieve the HTML page. If that succeeds, the CSV file is requested again. If this request fails a second time, the data for that time is considered non-existent and the update is abandoned.
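The fallback logic can be sketched with the HTTP layer abstracted out, so the control flow is clear. This is an illustration, not the downloader itself: `fetch` stands in for any callable returning a status code and a body, and the URLs and simulated responses are invented.

```python
# Sketch of the CSV/HTML/CSV fallback described above.

def download_station_csv(fetch, csv_url, html_url):
    """Try the CSV; on a 404, view the HTML page (which triggers CSV
    generation on the server), then retry the CSV once. Returns the
    CSV body, or None if the data is considered non-existent."""
    status, body = fetch(csv_url)
    if status == 200:
        return body
    if status != 404:
        return None
    status, _ = fetch(html_url)   # viewing the page generates the CSV
    if status != 200:
        return None
    status, body = fetch(csv_url)  # second and final attempt
    return body if status == 200 else None

# Simulated server: the CSV exists only after the HTML page is viewed.
state = {"generated": False}
def fake_fetch(url):
    if url.endswith(".html"):
        state["generated"] = True
        return 200, "<html>...</html>"
    if state["generated"]:
        return 200, "time,wind_speed\n12:00,14.2"
    return 404, ""

csv = download_station_csv(fake_fetch, "station.csv", "station.html")
print(csv.splitlines()[0])  # -> time,wind_speed
```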

The list of schools was loaded into a PostGIS table from the XML file provided by VictoriaWeather.ca. To get the weather records for each school, a program iterates over the list of schools, downloading the CSV file for each one. The loop contains a call to Thread.sleep(), which prevents the appearance of a DoS attack. The program is clever enough to check the database for weather records related to a particular time and school first, to avoid requesting the same data files multiple times.

Next, I created a Java utility to generate a semivariogram. This program loads the school list from the database and compares each to every other, computing the distance between them. The pairs are distributed into bins (I chose 1000m for this exercise). (The school positions had been projected onto the UTM grid, so calculating the distance is a simple matter of Pythagoras.) At the same time, the difference in value for each school’s weather record (for both wind speed and direction) is squared and halved, and saved with each pairing.
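The pairing-and-binning computation can be sketched in a few lines. This is a stand-alone Python illustration of the same procedure as the Java utility, with invented coordinates and wind values:

```python
# Empirical semivariogram: pair every station with every other, bin by
# distance (1000 m bins here), and record half the squared difference
# of the observed values in each bin.
import math
from collections import defaultdict

def empirical_semivariogram(points, bin_width=1000.0):
    """points: list of (x, y, value). Returns {bin_start: mean semivariance}."""
    bins = defaultdict(list)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            x0, y0, v0 = points[i]
            x1, y1, v1 = points[j]
            d = math.hypot(x1 - x0, y1 - y0)   # UTM grid: plain Pythagoras
            gamma = 0.5 * (v0 - v1) ** 2       # half the squared difference
            bins[math.floor(d / bin_width) * bin_width].append(gamma)
    return {h: sum(g) / len(g) for h, g in sorted(bins.items())}

# Three stations on a line; pairs fall at distances 800, 1700 and 2500 m.
obs = [(0.0, 0.0, 10.0), (800.0, 0.0, 12.0), (2500.0, 0.0, 16.0)]
print(empirical_semivariogram(obs))
# -> {0.0: 2.0, 1000.0: 8.0, 2000.0: 18.0}
```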

From this dataset, a variogram cloud and a semivariogram were produced (using gnuplot):

The first plot displays the semivariance of all pairs of schools. Unfortunately, it is difficult to discern any of the important properties (i.e., the nugget, range and sill) from this plot. Maybe the semivariogram would be more illuminating…

More illuminating, but perhaps not in the way I had hoped. This plot indicates that the data follow a linear trend, known as drift. For ordinary Kriging, the assumption is that the mean value of the field is uniform, so that strategy is immediately eliminated. However, in universal Kriging, a trend surface is created to represent this drift and the analysis is performed on the residuals (O’Sullivan and Unwin, 2012).

The next post will examine universal Kriging and attempt to apply it to this problem.

3D View of Victoria Derived from ASTER DEM

The first challenge for this project is determining which method to use for modelling the wind flow throughout the region. Cyclists know from experience that turbulent wind patterns can cause rapid variations in wind direction and speed, causing nearly instantaneous changes in effort, from coasting with a tailwind to battling a headwind. Of course, it is not possible to model the wind to that level of detail, particularly in Victoria, where one can be riding through a canyon of trees one minute and across a farmer’s field the next. So what is the best method? I’ll examine two of the most likely here, and make a tentative selection.

Computational Fluid Dynamics Approach

This method uses a fluid dynamics solving program to build a model of the study area and compute the wind (or other fluid) patterns within it.

The computational domain is first “discretized.” This involves splitting the three-dimensional space into cells, generally hexahedrons. First, a “background mesh” is formed for the domain, and then the mesh is refined to conform to whatever objects or surfaces are placed within it, in this case the terrain model. Hardin (2013) and Tapia (2009) give a concise overview of the principles behind the CFD approach and the practical steps of performing an analysis (Hardin provides configuration files and a Python script). STL (STereo Lithography, or Standard Tessellation Language) is a format for storing tessellations, and is used as the starting point for the mesh that would be fed to the modelling software (see a tessellation of the ASTER DEM for the Victoria area at right.)

Discretization is more than a spatial partitioning. Fluid flows are computed using partial differential equations, such as the Navier-Stokes equations. The discretization of these creates solvable linear equations that are associated with the centroid of each block. The solver computes a value for each centroid and interpolates the result to the rest of the block.

The inputs to the solver are “boundary conditions,” which describe the state of the fluid as it enters the domain. For example, a boundary condition might indicate the gradient (the relative velocity of wind at ground level versus at the ceiling of the domain), the wind speed and wind direction.

The benefits to using a CFD approach are:

• Very precise solutions using models from physics.
• Readily available free software and models.
• Accounts for the influence of terrain.

The drawbacks to this approach are many:

• Very computationally intensive.
• Uses engineering principles that are beside the point and beyond the scope of this project, and beyond the knowledge of its author.
• Difficult to configure, particularly given the lack of engineering knowledge.
• The boundary conditions are difficult to derive, given interior conditions (Schneiderbauer, S., & Pirker, S., 2011).

This last drawback is important because it describes the data exactly: the weather stations record interior conditions. Also, the application demands that the wind map be computed for every minute during a ride that could last for hours. The memory and processor demands, and hence the time taken for the power calculation, would likely be intolerable.

From a learning perspective, CFD is simply too abstruse. There isn’t enough time to learn the principles of fluid dynamics — though it would be interesting to do so — and still deliver on the project. Without learning the principles, the CFD modelling software is just a very finicky “black box.” A CFD solution may be a good exercise for refining the model in the future.

Kriging

Kriging is an interpolation method that computes unknown values in unobserved locations, given known values in observed locations. Basically, this approach would “fill in” the cells of a wind speed/direction grid by estimating values from nearby measurements. Essentially, Kriging determines the value at an unknown location using weighted linear combinations of known samples. Kriging is superior to more naive interpolation strategies because it considers the spatial dependence between observations (Yaseen, 2013).

Kriging begins with the semivariogram, which plots the binned linear distance between each pair of sampled locations on one axis, and one half (hence, “semi”) the average of the squared differences for linked points in each bin on the other (Binning empirical semivariograms, 2013; Yaseen, 2013).

The semivariogram has three properties which are important for the determination of the weights. First, theoretically, the variance in the property of interest should approach zero when the Cartesian distance between two locations approaches zero. On a graph, a curve fit to the distribution would have a y-intercept of zero. However, in practice, the y-intercept is often not zero, a phenomenon attributable to errors or variations that only become significant at small scales (Understanding a semivariogram, 2013).

The other important properties are the range and sill. The range is the horizontal location where the curve appears to level out. This corresponds to the distance between locations at which spatial autocorrelation becomes insignificant. The y-value at the range is called the sill (Understanding a semivariogram, 2013). The range and sill can be determined by visually assessing a plot of the semivariogram, or by some automated method.

As described, the semivariogram assumes isotropy. That is, the forces affecting the values are the same in all directions. The semivariogram can also accommodate zonal and geometric anisotropy. In the latter case, the sill remains the same, but the range may be different depending on direction. In the former case, the sill and range may change (Yaseen, 2013).

With the semivariogram assembled, a model is selected which best fits the distribution and the weights are derived from the model for each point pair. Then the weights are applied in the calculation of the value for each unknown location.
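One common choice is the spherical model, which rises from the nugget and levels off at the sill once the lag reaches the range. A sketch, with placeholder parameter values rather than ones fitted to this data:

```python
# Spherical semivariogram model: gamma(h) climbs from the nugget and
# flattens at the sill for lags at or beyond the range.

def spherical(h, nugget, sill, rng):
    """Model semivariance at lag distance h (same units as rng)."""
    if h == 0.0:
        return 0.0          # zero lag: no variance by definition
    if h >= rng:
        return sill         # beyond the range: no spatial autocorrelation
    r = h / rng
    return nugget + (sill - nugget) * (1.5 * r - 0.5 * r ** 3)

# Placeholder parameters: nugget 1.0, sill 10.0, range 3000 m.
print(spherical(5000.0, 1.0, 10.0, 3000.0))  # -> 10.0 (flat at the sill)
print(spherical(1500.0, 1.0, 10.0, 3000.0))  # -> 7.1875 (mid-range)
```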

Benefits to using Kriging:

• Reasonably fast and efficient.
• Documented implementations are available (e.g. Lang, 2014).
• At long last, an opportunity to grok something I should have learned in GEOG418.

Drawbacks:

• Doesn’t inherently account for the effects of terrain.
• Have to figure out which type of Kriging to use (Ordinary? Simple?)

The terrain issue should be fatal for Kriging. If, as often occurs in Victoria, two weather stations are found on either side of a significant topographical feature, Kriging cannot account for the diversion of wind flows around or over the feature. This would not be significant at high elevations or large geographic scales, but the cyclist operates at low elevation on a small scale. However, given the lack of practical, computationally-simple options, and the reasonable density of weather stations in Victoria, Kriging might have to suffice.

Other Methods

There are other methods for estimating wind flow, but those that are not CFD solutions tend not to consider terrain, and to employ some species of Kriging. Some operate at too large a scale to be useful for analysis of this type.

• Artificial Neural Networks (Cellura, 2008; Beccali, 2010; Fadare, 2010; Robert, 2013).
• Multi-layer perceptron (MLP)