Proof of part of haversine formula?

I'm trying to study how to find the distance between two points if their latitudes and longitudes are given.

I have a question about the proof of part of the haversine formula given at Math Forum. It says that the length of the chord AD between two points at the same latitude, lat1, on a sphere of radius 1 is

AD = 2 cos(lat1) sin(dlon/2)

but I couldn't see how they obtained it. Could you help me?

The radius, r, of the small circle joining all points at latitude φ is

r = R cos φ

where R is the radius of the sphere. That simplifies to

r = cos φ

if we assume a "unit sphere" (R = 1) for convenience.

(ASCII "side" view: a cross-section of the sphere, showing the polar axis, the equatorial radius, the sphere radius R running from the center to A/D at angle φ above the equatorial plane, and the small-circle radius r at that latitude.)

The chord length of a straight line, AD, joining two points on the same latitude is

AD = 2 r sin(dλ/2)

where dλ is the difference in longitude of A and D. Thus

AD = 2 R cos φ sin(dλ/2)

which reduces to

AD = 2 cos φ sin(dλ/2)

if R = 1.

(ASCII "top" view: the latitude circle of radius r, with the chord AD subtending the angle dλ at its center.)
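The derivation above is easy to put into code; a minimal Python sketch (the function name is mine):

```python
import math

def chord_same_latitude(lat_deg, dlon_deg, R=1.0):
    """Straight-line chord AD between two points at the same latitude,
    dlon_deg degrees of longitude apart, on a sphere of radius R."""
    r = R * math.cos(math.radians(lat_deg))     # small-circle radius at this latitude
    return 2.0 * r * math.sin(math.radians(dlon_deg) / 2.0)

# On the equator, points 180 degrees apart are a full diameter (2R) apart.
print(chord_same_latitude(0, 180))  # 2.0
```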

Talk:Haversine formula/Alt

The haversine formula is an equation important in navigation, giving distances between two points on a sphere from their longitudes and latitudes. It is a special case of a more general formula in spherical trigonometry, the law of haversines, relating the sides and angles of spherical "triangles". These names follow from the fact that they are customarily written in terms of the haversine function, defined by:

haversin(θ) = sin²(θ/2) = (1 − cos θ)/2

For two points on a sphere (of radius R) with latitudes φ1 and φ2, latitude separation Δφ = φ1 − φ2, and longitude separation Δλ, the distance d between the two points (along a great circle of the sphere, as usual for spherical geometry) is related to their locations by the formula:

haversin(d/R) = haversin(Δφ) + cos(φ1) cos(φ2) haversin(Δλ)

Let h denote haversin(d/R), given from above. One can then solve for d either by simply applying the inverse haversine (if available) or by using the arcsine (inverse sine) function:

d = R haversin⁻¹(h) = 2R arcsin(√h)
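A straightforward Python sketch of the whole formula (the function name and the min() clamp are my additions; the clamp relates to the floating-point caveat discussed later in this section):

```python
import math

def haversine_distance(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance between two (lat, lon) points, degrees in,
    same unit as R out (km with the default Earth radius)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    h = min(1.0, h)   # clamp: rounding can push h a hair above 1 near antipodal points
    return 2 * R * math.asin(math.sqrt(h))

# Quarter of a great circle: on the equator, 90 degrees of longitude apart
print(haversine_distance(0, 0, 0, 90))  # ~10007.5 km
```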

The haversine formula could of course be expressed entirely in terms of the more common trigonometric functions such as sine and cosine. But in the era before the digital calculator, the use of detailed printed tables for the haversine/inverse-haversine and its logarithm (to aid multiplications) saved navigators from squaring sines, computing square roots, etc., a process both arduous and likely to exacerbate small errors (see also versine).

The remainder of this section would have to be introduced by a discussion of why, even with computers, haversines remain popular. Since I don't really understand this well myself, I hope somebody else will write it. The section could include inter alia the following snippets I've collected from the original article:

As described below, a similar formula can also be written in terms of cosines (sometimes called the law of cosines, not to be confused with the law of cosines for plane geometry) instead of haversines, but suffers numerical precision problems for the common case of small distances/angles, which render it unsuitable for serious use.
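That ill-conditioning is easy to demonstrate numerically. A sketch (function names and test values are mine) comparing the two forms for two points on the same latitude:

```python
import math

R = 6371.0                     # km, spherical Earth
lat = math.radians(40.0)       # two points on the same latitude

def law_of_cosines(dlon):
    # spherical law of cosines; the clamp guards acos against rounding above 1
    x = math.sin(lat) ** 2 + math.cos(lat) ** 2 * math.cos(dlon)
    return R * math.acos(min(1.0, x))

def haversine(dlon):
    h = math.cos(lat) ** 2 * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

# For a separation well below a millimetre, cos(dlon) rounds to exactly 1.0:
# the cosine form typically collapses to zero (or centimetre-scale noise),
# while the haversine form still resolves the distance correctly.
tiny = math.radians(1e-9)
print(law_of_cosines(tiny), haversine(tiny))
```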

The formulas could equally be written in terms of the versine function (twice the haversine). Historically, the haversine had, perhaps, a slight advantage in that its maximum is one, so that logarithmic tables of its values could end at zero. These days, the haversine form is also convenient in that it has no coefficient in front of the sin² function.

Although it avoids many problems associated with other systems, the haversine formula has its own weaknesses. Care must be taken to ensure that h does not exceed 1 due to a floating point error (d is only real for h from 0 to 1). But h only approaches 1 for antipodal points (ones on diametrically opposite sides of the sphere)—in this region, relatively large numerical errors tend to arise in the formula when finite precision is used. Because d is then large (approaching πR, half the circumference) a small error is often not a major concern in this unusual case. (The formula above is sometimes written in terms of the arctangent function, but this suffers from similar numerical problems near h = 1.)

It should be noted that any of these formulae can only be an approximation when applied to the Earth, because the planet is not a perfect sphere: its radius R varies from 6356.78 km at the poles to 6378.14 km at the equator. There are small corrections, typically on the order of 0.1% (assuming the geometric mean R = 6367.45 km is used everywhere), because of this slight ellipticity of the planet.

Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points u, v, and w on the sphere. If the lengths of these three sides are a (from u to v), b (from u to w), and c (from v to w), and the angle of the corner opposite c is C, then the law of haversines states:

haversin(c) = haversin(a − b) + sin(a) sin(b) haversin(C)

(Figure: spherical triangle solved by the law of haversines.)

Since this is a unit sphere, the lengths a, b, and c are simply equal to the angles (in radians) subtended by those sides from the center of the sphere (for a non-unit sphere, they are the distances divided by the radius).

In order to get the haversine formula of the previous section from this law, one simply considers the special case where u is the north pole, while v and w are the two points whose separation d is to be determined. In that case, a and b are π/2 - φ1,2 (i.e., 90° − latitude), C is the longitude separation Δλ, and c is the desired d/R. Noting that sin(π/2 - φ) = cos(φ), the haversine formula immediately follows.
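Spelled out, the substitution in the previous paragraph goes as follows (using the fact that haversin(−x) = haversin(x)):

```latex
\begin{aligned}
a &= \tfrac{\pi}{2} - \varphi_1, \qquad
b = \tfrac{\pi}{2} - \varphi_2, \qquad
C = \Delta\lambda, \qquad
c = d/R,\\[4pt]
\operatorname{haversin}\frac{d}{R}
  &= \operatorname{haversin}(a - b) + \sin a \,\sin b \,\operatorname{haversin}(C)\\
  &= \operatorname{haversin}(\varphi_2 - \varphi_1)
     + \cos\varphi_1 \cos\varphi_2 \,\operatorname{haversin}(\Delta\lambda)\\
  &= \operatorname{haversin}(\Delta\varphi)
     + \cos\varphi_1 \cos\varphi_2 \,\operatorname{haversin}(\Delta\lambda).
\end{aligned}
```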

A proof of the law of haversines can be constructed as follows. Let u, v, and w denote the unit vectors from the center of the sphere to those corners of the triangle. Then, the lengths (angles) of the sides are given by:

cos(a) = u · v,  cos(b) = u · w,  cos(c) = v · w

To get the angle C, we need the tangent vectors ta and tb at u along the directions of sides a and b, respectively. For example, the tangent vector ta is the unit vector perpendicular to u in the u-v plane, whose direction is given by the component of v perpendicular to u. This means:

ta = (v − u cos(a)) / sin(a)

where for the denominator we have used the Pythagorean identity sin²(a) = 1 − cos²(a). Similarly,

tb = (w − u cos(b)) / sin(b)

Then, the angle C is given by:

cos(C) = ta · tb = (cos(c) − cos(a) cos(b)) / (sin(a) sin(b))

which leads to the (spherical) law of cosines, also called the cosine rule for sides:

cos(c) = cos(a) cos(b) + sin(a) sin(b) cos(C)

As mentioned above, this formula is an ill-conditioned way of solving for c in the case of small triangles. Instead, we substitute the identity that cos(θ) = 1 − 2 haversin(θ), and also employ the addition identity cos(a − b) = cos(a) cos(b) + sin(a) sin(b), to obtain the law of haversines, above.
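As a quick numerical sanity check of this construction (a Python sketch; all names and test values are mine), one can build an arbitrary spherical triangle, compute C from the tangent vectors, and confirm that the law of haversines holds:

```python
import math

def hav(x):
    """haversin(x) = sin^2(x/2)"""
    return math.sin(x / 2) ** 2

def sph(lat, lon):
    """Unit vector for a point at (lat, lon), in radians."""
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

# An arbitrary non-degenerate triangle; u is the corner opposite side c
u, v, w = sph(0.3, 0.1), sph(0.9, 1.2), sph(-0.2, 0.7)
a = math.acos(dot(u, v))   # side from u to v
b = math.acos(dot(u, w))   # side from u to w
c = math.acos(dot(v, w))   # side from v to w

# Angle C at u via the tangent-vector construction above
ta = tuple((vi - ui * math.cos(a)) / math.sin(a) for vi, ui in zip(v, u))
tb = tuple((wi - ui * math.cos(b)) / math.sin(b) for wi, ui in zip(w, u))
C = math.acos(max(-1.0, min(1.0, dot(ta, tb))))

# The law of haversines should hold to rounding error
residual = hav(c) - (hav(a - b) + math.sin(a) * math.sin(b) * hav(C))
print(abs(residual))
```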

Calculating distance between two geographic points (Python recipe) by Bartek Górny

Implementation of Haversine formula for calculating distance between points on a sphere. Given geographic coordinates, returns distance in kilometers.

Returns an orthodromic distance, assuming the Earth is an exact sphere. Not recommended for travel planning :) Can be useful e.g. for recalculating GPS data.


Thanks for the nice recipe. I was just searching for such routines to calculate distances from Google Maps .kmz files, and I used your routines in my recipe:

as is a keyword and cannot be used in recalculate_coordinate() parameters list.

Thanks for the warning - it worked for me because I still use Python 2.5.

(the usual way is to use a trailing underscore (as_) because a leading underscore (_as) means "private")

"c = 2 * math.atan2(math.sqrt(a), math.sqrt(1-a))" can be simplified to "c = 2 * math.asin(math.sqrt(a))"
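The two forms agree because atan2(√a, √(1−a)) is exactly the angle whose sine is √a when 0 ≤ a ≤ 1. A quick check (sketch; names and sample values are mine):

```python
import math

def c_atan2(a):
    return 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))

def c_asin(a):
    return 2 * math.asin(math.sqrt(a))

# Compare across the valid range of a, including near the endpoints
worst = max(abs(c_atan2(a) - c_asin(a))
            for a in (0.0, 1e-15, 0.25, 0.5, 0.9999999))
print(worst)
```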

Sorry guys, I am such a newbie at python but not programming in general. I know nearly all the basic expressions and progressing well.

Ok from the top. I have designed a weather balloon with a fully fledged GPS tracking system but have found a small fault in the omni-directional antenna maths. My solution is to use a Yagi antenna, but that brings up another problem: how are you going to aim it? My second solution uses a robotic aimer to aim it.

Since I am so keen on doing it myself and also do not have the budget to buy a system like this, I decided, with a bit of fun, to do it myself.

So far so good. Everything has gone to plan so far: the maths, the design, the budget, and some of the Python math programming as well.

I made a simple equation to do the vertical aim:

Alongitude = input("Enter longitude of Antenna GPS: ")

Alatitude = input("Enter latitude of Antenna GPS: ")

Blongitude = input("Enter longitude of Balloon GPS: ")

Blatitude = input("Enter latitude of Balloon GPS: ")

aone = Blatitude - Alatitude

bone = Blongitude - Alongitude

cone = aone * 2 + bone * 2 / 2

Baltitude = input("Enter height of Balloon GPS: ")

Aaltitude = input("Enter height of Antenna GPS: ")

distance = cone * 2 + Baltitude * 2 / 2

Caltitude = Baltitude - Aaltitude

print "The degree of vertical angle is:"

print "The distance between the Balloon GPS and the Antenna GPS is:"

print "The degrees of horizontal angle is:"

print ddone

Note: the end result and the data entered will be automated

Now I just had to find or make an equation to sort out the latitude and longitude to find the distance to be used in the first equation, and the bearing to aim the robot antenna.

But I somehow just can't get my head around this equation here. Funnily enough, I have gotten confused over the data input, the type of input (I should know by now) and the actual equation itself.

I have just turned 15 (Dec), so excuse any mistakes I make or my lack of understanding. I have researched but cannot understand it completely.

Could anyone explain this code to me in more detail and how to implement this to my design?
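Not a full answer, but here is a hedged sketch (all names are mine; spherical-Earth assumption; Python 3, so each input() value needs wrapping in float()) of how the three quantities your script prints could be computed:

```python
import math

EARTH_R = 6371000.0  # metres; spherical-Earth approximation

def ground_distance_m(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in metres (inputs in degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_R * math.asin(math.sqrt(min(1.0, h)))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Horizontal aim: initial great-circle bearing, degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def elevation_deg(ground_m, balloon_alt_m, antenna_alt_m):
    """Vertical aim: angle above the horizontal, ignoring Earth curvature
    over short ranges."""
    return math.degrees(math.atan2(balloon_alt_m - antenna_alt_m, ground_m))
```

With the antenna at (Alatitude, Alongitude) and the balloon at (Blatitude, Blongitude), ground_distance_m replaces the cone calculation, bearing_deg gives the horizontal angle, and elevation_deg the vertical one.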


Automatic Identification System (AIS) is used to identify vessels in maritime navigation. Currently, it is used for various commercial purposes. However, the abundance and lack of quality of AIS data make it difficult to capitalize on its value. Therefore, an understanding of both the limitations of AIS data and the opportunities is important to maximize its value, but these have not been clearly stated in the existing literature. This study aims to help researchers and practitioners understand AIS data by identifying both the promises and perils of AIS data. We identify the different applications and limitations of AIS data in the literature and build upon them in a sequential mixed-design study. We first identify the promises and perils that exist in the literature. We then analyze AIS data from the port of Amsterdam quantitatively to detect noise and to find the perils researchers and practitioners could encounter. Our results incorporate quantitative findings with qualitative insights obtained from interviewing domain experts. This study extends the literature by considering multiple limitations of AIS data across different domains at the same time. Our results show that the amount of noise in AIS data depends on factors such as the equipment used, external factors, humans, dense traffic, etc. The contribution that our paper makes is in combining and making a comprehensive list of both the promises and perils of AIS data. Consequently, this study helps researchers and practitioners to (i) identify the sources of noise, (ii) reduce the noise in AIS data and (iii) use it for the benefit of their research or the optimization of their operations.

Fairly obvious indeed: Lat/long are not coordinates on a plane, but on a sphere. In short, the closer you get to the poles, the shorter the distance between two points on the same. I want to say longitude.

If memory serves, they want the Haversine formula for a fairly good approximation across the world. I've spent too long working with map APIs :-(

...and besides, what are the units in? If lat/long are in degrees, then... never mind.

We had an interesting case of a complete inability to understand the problem domain here once, where we had to implement a currency converter. The specific exchange rate you were to use depended on whether you were buying or selling the currency. But the clown implementing it thought that the conversion factor (from e.g. Euros to Dollars) to sell e.g. the Euros was the reciprocal of the conversion factor (from the same e.g. Euros to Dollars) to buy the Euros. He looked at me completely blankly when I questioned him and asked why he had not sought out the specification before starting the job.

okay, it's an (understandable) maths whoopsy, but not really a major code wtf and it was apparently easy enough to track down the bug and fix it

The major WTF is that the dev didn't take 5 minutes to look up how to implement (or just find a working example of) the Haversine formula for distance between latLngs.

Another vote for TRWTF being: not knowing about haversines.

Back when I did web-dev, I had to implement the redesign of some pages which could calculate the distance from the visitor's location (if permitted) to a branch location specified by latitude and longitude. Luckily for me, I was supposed to re-implement the existing functionality on the shiny new design, not implement properly working functionality.

I never actually thought about whether we were doing this or not. Turns out we are doing it properly. Though since anyone who actually wants to get there is going to be starting from within the UK, and almost certainly within a few miles, the difference is probably not very significant. Especially not compared to the difference between our as-the-crow-flies calculation and the actual distance by road, which the potential customer will care about.

Wow. As others have mentioned, the real WTF is not using some form of ellipsoid-based algorithm. Flat-earth Pythag equations are going to be very wrong in many circumstances. Haversine is a fine approximation of reality for things not needing a huge degree of accuracy. If this app is simply telling users how far they are from a weather station, haversine will be good enough (you'll be within tens of meters of reality in most cases). If you need more accuracy - there are (ever) more complex algorithms that take into account the true(r) non-spheroidal nature of the earth.

@ray10k You might want to say longitude, but if you do, you'll be wrong.

Two points five degrees (of latitude) apart on the same longitude (e.g. 3°03′30″E) will be (subject to questions of the Earth being not-really-a-true-ellipsoid) the same distance apart. You should say "same latitude", since lines of latitude have the same number of degrees of longitude on them, but progressively fewer miles as you go north.

@CarrieVS In the UK, the magnitude of the error in the east/west component from using Pythag instead of Haversine (etc.) is in excess of 40%. (That is, the calculated distance is more than 40% bigger than the true distance, in the strictly east/west direction.)
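That factor is easy to reproduce: treating a degree of longitude as if it were as long as a degree of latitude overstates purely east-west distances by a factor of 1/cos(latitude). A quick check for UK latitudes (sample values mine):

```python
import math

# 1/cos(latitude) - 1 = fractional overestimate of east-west distance
# when longitude degrees are weighted the same as latitude degrees.
for lat in (50, 54, 58):                # roughly the UK's latitude span
    excess = 1 / math.cos(math.radians(lat)) - 1
    print(lat, f"{excess:.0%}")         # 56%, 70%, 89% overestimate
```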

Addendum 2016-12-06 08:17: That is, the points on the same longitude - separated by 5 degrees of latitude - will be the same distance apart no matter where they are.

Yeah, looked it up afterwards and I did indeed mix up longitude and latitude.

That's bigger than I had guessed, but our distances were very often shorter than the road distance by at least that proportion. In fact an over-estimate of 40% would probably have improved the accuracy at least as often as not.

and not a single mention of great circle (orthodromic. ) distance

Dunning-Kruger effect, I think. Who-ever wrote the Pythagoras solution wasn't merely ignorant of how to calculate distances on a round Earth, they were ignorant of their own ignorance. Maybe the lesson to take is that you should always learn more about the problem domain, even if you think you know it.

@Steve_The_Cynic and @CarrieVS , if your co-ordinates are latitude and longitude then indeed naively calculating the Pythagorean distance between them will be a big overestimate. If however they are British National Grid co-ordinates then the Pythagorean distance will be pretty accurate. Provided the input co-ordinates are accurate - taking a GPS latitude and longitude and naively converting that to national grid will be in error of tens of metres because GPS and national grid use different shapes for the Earth implying different lat-long co-ordinates for the same point.

Yeah, map projections are a rabbit hole.

Not by name, but that's the one you get with the haversine formula.

haversine? L = R * arccos(sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(lon1 - lon2))

As someone who works with GPS and similar things every day, I can tell you that if you hear the term "Geoid Undulation" while at work, you are about to have a very bad day.

However, I don't really view this as a true WTF. The programmer thought he saw the problem, and just didn't know a lot about... the earth, or physics, or cartography, or geometry.

Honestly, I could see a LOT of people making that same mistake, and in fact I am pretty sure that people around my work have in the past. We all spent so much of our childhoods working in nice flat coordinate systems that it is really annoying to realize that the universe so often doesn't work that way.

"However, I don't really view this as a true WTF."

It is a true WTF. It is just not a programming WTF.

If I don't see some trig functions in the calculation, I assume it isn't right. Or why some still live on flat Earth. :-)

The problem there was the lack of specific requirements. If you don't know how currency conversion works then it is entirely reasonable to assume it works like any other conversions.

Here's my thing, though- this would be trivial to catch in testing. The fact that this was in the code and was in use implies that it was never actually tested.

Computing accurate distances on an ellipsoid's surface is quite hard:

A few years ago I tried to convert lon/lat coordinates to x/y/z Cartesian ones and back. I was unable to find a non-iterative solution. Anyway, the geometric approximation converged quickly and seemed to be stable.

The main problem is that lon/lat values denote the angle of the surface normal, rather than an angle measured from some cutting plane.

Depending on what accuracy you have to achieve, the haversine formula is definitely out of the question. This one should yield better results:

Nevertheless: the code shown in the article just hurts the area in my brain that has to handle basic math.

That's true, this would definitely be easy to catch in testing, if you have a strong test suite of any kind. And this would likely be caught by any other engineer who looked at it. However, if no one was looking closely, and no one was testing cases that were terribly close to the poles (most likely everyone was in North America anyway)

I will accept that TRWTF is that no one looked at the code closely enough to catch this before it got into production, or that someone trusted this person enough to have them implement a core/important part of a system without them knowing at all what they were doing.

Some kind of a "fun-fact" regarding practical implications of ellipsoids vs. spheroids:

Finish of the first paragraph: (most likely everyone was in NA anyway) then it likely would produce results at least somewhat close to the actual results. As long as you stay within one continent (that isn't Asia) you aren't going to get results more than a couple of miles off, and if you're just considering distance to the nearest NWS system, a couple of miles most likely doesn't make a huge difference.

How 'bout it's wrong because it's just that: wrong. I know nothing about properly calculating that distance, but the featured formula is showing details about only one end of the 2-point line, which just can't be enough to calculate that distance. The result of that formula probably doesn't have any meaning whatsoever.

Maybe he used to work for CityWide Bank:

"How do we make money? Simple. Volume."

That formula works if all the values are local, but if you're looking world wide, there are MANY problems. The biggest I can see is that lat/long values can be negative, but this treats them all as positive so points on opposite sides of the planet would be calculated as right next to each other.

"any number squared is positive, so you don't need abs()" .. Oh, yeah? What if the latitude is a complex value? What then, eh?

Put this in the same category as time calculations. It ain't easy! Things get even more complex when you want to calculate intersecting lines on the surface of the earth. Two "straight" lines can even intersect TWICE if you have them in the right place. A program I worked on (Airplane navigation) needed to know if something intersected or not. My first test was using simple squares based on Lon & Lat. I figured if this intersected (the common case was that it didn't) I would take the high road and calculate the true place using all sorts of spherical and the like coordinates. If not, then skip it. I made the assumption that the lines didn't intersect twice which for the most part was OK in the cases I was getting around to.

The bigger problems happen when you go from geocentric coordinates (measuring from the center of the earth) to/from geodesic coordinates (down "points" to the center of the earth, and oblate spheroid things). Gets plenty confusing when you go from one place to another and things change. Ain't GPS wonderful.

Even a bunch of CERN physicists got this wrong. Remember the excited statements about neutrinos travelling faster than light? Quickly retracted after a high school student pointed out neutrinos tend to just dig through the earth instead of taking the surface roads. :-)

Note - Don't say "North America" when you mean "USA". Assuming the lower 48 are flat rather than on a spheroid surface may not give you much deviation, but when you head north it really matters, because in Canada it varies pretty significantly. Maybe not much when you're just going along the 49th parallel, but once you head north a few hundred kilometers the error becomes big (and, well, the territory extends to the North Pole...).

It's probably also a big deal in Alaska.

Um, this isn't even remotely correct: degrees do not convert to miles that way, and you are on a sphere, not a plane. A degree of latitude is a constant number of miles, but a degree of longitude is 25,000/360 miles only at the equator. You have to use a somewhat fancier formula to do real distances on a sphere and convert degrees to distance.

From what I recall, the super-luminal neutrino thing was actually a faulty optical fibre connection resulting in a degraded signal. Nothing to do with geographic distances.

@Smartass The calculation uses latitude difference (latdiff) and longitude difference (londiff). So both endpoints are used.

@Herby On a sphere, two "straight lines" always intersect twice (if you continue them long enough). That's because the analog of a straight line (shortest path) on a sphere is a great circle (circle with the same center and radius as the sphere), and any two of those intersect at antipodal points.

I worked with some severely WTFy software (it's worth a post in its own right, when I can repress the memories enough) that was written by a software company in Canada, and made a lot of assumptions based on those southerly latitudes. Anywhere in the UK, everything was really weirdly squashed. In Scotland the distortion was so severe that maps were drawn trapezoidally, and looked like sat nav "head up" maps.

Does londiff wrap at +/-180 degrees? If not, the international date line is going to be much more fun than the poles.

The international date line doesn't strictly follow the 180° meridian. There are detours in the Bering Strait (so the line goes between Siberia and Alaska), eastward around Kiribati (to avoid the IDL crossing the country) and eastward around some islands belonging to New Zealand (same reason). The problem with wrapping would occur across the meridian, regardless of where the IDL is (fortunately, there isn't much land around it). Also, if londiff is more than 180°, it should be replaced by 360° - londiff to avoid going the long way around.
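The wrap can be handled in one place before the difference is used; a small sketch (the function name is mine):

```python
def wrapped_londiff(lon1, lon2):
    """Smallest longitude separation in degrees, never the long way round."""
    d = abs(lon2 - lon1) % 360.0
    return 360.0 - d if d > 180.0 else d

# Crossing the 180-degree meridian: 2 degrees apart, not 358
print(wrapped_londiff(179.0, -179.0))  # 2.0
```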

At the wholesale level, the forward and reverse conversion rates are so close that, for many purposes, clients can and do use the reciprocal of the "buy" rate as the "sell" rate, or the reciprocal of the "sell" rate as the "buy" rate, or the "mid" rate as either the buy or sell rate.

I supplied treasury software for several years, and although our software supported buy, sell and mid rates, virtually all of our clients used only one rate.

(All clients always use the reciprocal of the transaction rate for some reports and projections, but that's a different requirement)

(most likely everyone was in North America anyway)

Non-sequitur. Some parts of North America are as far north as the UK, which was already mentioned as having 40% difference from a flat-Earther's calculation. Some parts of North America are even farther north, making the programmer a suitable target for a poleish joke.

Things get even more complex when you want to calculate intersecting lines on the surface of the earth.

It seems to me a line should intersect the surface of the earth twice, or possibly more depending on how it passes through mountains and valleys.

Fun happens with New Zealand or Taiwan that are moving fast enough on tectonic plates to make a difference when you calculate a distance to them from year to year.

From memory of oil survey navigation software, their map datums have a time variable in them too.

it would be enough to give delivery by GPS drone a bad name as it drops packages into the neighbour's pool rather than the front driveway.

For the UK or any similar-sized country, you can get results within one part in a thousand by converting from lat/long to Universal Transverse Mercator (or the Ordnance Survey GB projection for the UK), obtaining results in projected metres, then using Pythagoras on that. It's effectively the equivalent of measuring with a ruler on a topographic map.

Well, that conversion doesn't give equal weights to latitude and longitude (unless you're near the equator).
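Short of a full projection, a common compromise is the equirectangular approximation, which applies exactly this cos(latitude) weighting to the longitude axis before using Pythagoras (a sketch; the function name is mine, and the "part in a thousand" claim only holds over short, country-sized distances):

```python
import math

def equirect_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Flat-earth Pythagoras with the longitude axis scaled by cos(mean latitude);
    a reasonable short-range approximation to the great-circle distance."""
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    x = math.radians(lon2 - lon1) * math.cos(mean_lat)
    y = math.radians(lat2 - lat1)
    return R * math.hypot(x, y)
```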

When working for a defense contractor in the US I once had this argument with my boss for several days. She was insistent that her formula was right. I never could quite convince her, but she did finally give in and let me use my own formula in calculating the distance. What's worse, we were testing the validity of a missile defense system and this formula was being used to test the size and shape of the 'error ellipse' (ie, area in which there was a xx% chance of the missile hitting). They'd been using the Euclidean formula for testing for over 3 years. :(

As a person with Asperger's syndrome, this is one of the things that piss me off the most about neurotypicals: their insistence that social rules (rank, protocol, etc.) trump the rules of mathematics.

Robert, that's utterly politically incorrect: you didn't grasp the concept of democracy at all. If the majority decides that 2+2=5, then 2+2=5. Period.

Android Powered Autonomous GPS Robot

Smartphones have now become extremely powerful, with multicore processors, large storage capacities and very rich functionality. Android can therefore be used as a cost-effective and easy platform to be the brain controlling a robotic device. The purpose of such systems is to pair a powerful computational Android platform with simpler robot hardware. Existing systems use Bluetooth or Wi-Fi modules for remotely controlling the robot. The proposed project deals with making an autonomous GPS robot with obstacle avoidance using an Android smartphone.

3.3 Project Cost and Time Estimation

3.5 Task & Responsibility Assignment Matrix

4.1 Software Architecture diagram

4.2 Architectural style and justification

4.3 Software Requirements Specification Document

5.2 Programming Languages used for Implementation

List of Tables

3.5 Assignment and Responsibility Matrix

6.4 Unit and Integrated System Test Cases

List of Figures

3.7 Project Timeline Chart

4.1 Software Architecture Diagram

Phone Mirror Application

Real time GPS Tracking Activity

Canny Edge Detection Activity

5.1.4 Intensity Gradient of the image

5.1.5 Non-maximum suppression

Chapter 1: Introduction

1.1 Background

Android powers hundreds of millions of mobile devices in more than 190 countries around the world. Large storage capacity, rich functionality and fast processing are features that Android smartphones provide almost inexpensively. Communication between Android smartphones has become much easier thanks to the Android development environment, which lets software engineers use Java without needing to learn a new programming language. Android devices also provide an easy way to deal with hardware components. Of particular interest for robotics, Android provides communication interfaces for Bluetooth, Wi-Fi, USB and GPS. As software developers, we are interested in developing applications through test-based development. This project proposes developing an autonomous GPS robot using Android. Such an autonomous robot can be used for many applications, such as a delivery robot or a surveillance bot. A delivery system to safely deliver packages within 30 minutes using drones is under development by Amazon, to be provided to customers by April 2017. This not only improves cost effectiveness in delivering products but also leads to faster delivery to customers. Surveillance, also known as close observation, can be implemented in remote areas where there is a need for security: it allows close observation of a particular entity without putting anyone at risk. This project is conceptualized on similar ideas.

1.2 Problem Definition

Existing systems use separate GPS, IR, and Bluetooth modules connected to an Arduino board. However, there are certain problems with such systems: the limited range of the Bluetooth module, the low accuracy of a separate GPS module, the need for multiple IR sensors, the lower processing capacity of the Arduino and, most importantly, the combined cost of all of these components. Another method along similar lines uses an ESP8266 Wi-Fi module instead of Bluetooth. This may increase the control range, but the robot still isn't autonomous. The addition of this module also leads to high power consumption, while an Android device can itself act as a power source for the robot.

Our system proposes controlling a robot with Android mobile devices using GPS and Wi-Fi. GPS coordinates are fed to the robot, and the robot/vehicle tracks its path; Wi-Fi is used for close control. The system involves two Android devices that communicate with each other using Wi-Fi Direct. The robot has obstacle detection and tracks its own path according to the GPS coordinates fed into it. Commands are given to the robot through an Arduino board, with smartphone-Arduino communication over a USB OTG cable.

The project presents a mechanism for controlling a robot with Android mobile devices using GPS and Wi-Fi. Using the Global Positioning System (GPS), coordinates can be fed to the robot, i.e. starting and final positions can be given as instructions to the robot device. Wi-Fi is used for sending commands to the robot in the case of close control. Using GPS, the robot can also measure the distance travelled. The overall system consists of two Android mobile phones and a robot. We can create an application that runs under the Android operating system and controls a mobile robot with an Android device fixed on it. The two Android devices communicate with each other using Wi-Fi Direct, which allows two devices to connect wirelessly without an access point; it is similar to Bluetooth but with much greater range and performance. The mobile robot should be able to move in internal as well as external environments without colliding with obstacles. Other types of movements are identified using an agglomerative clustering technique. The Android smartphone fixed on the robot communicates with the Arduino using a USB OTG cable.

1.4 Organization of the Report

The following chapters have been included in the report:

  1. Introduction: The chapter justifies and highlights the problem posed, defines the topic, and explains the aim along with the scope of the work presented.
  2. Literature Review: A critical appraisal of previous works published on the topic of interest. Various features required in the project were studied from a range of previously published papers.
  3. Project Management Plan: The section portrays how the project development process was planned and managed.
     • Feasibility Analysis: An analysis of the feasibility of the project in terms of cost, technicality, and software and hardware aspects.
     • Project Cost and Time Estimation: The cost of the hardware and other utilities, and an estimate of the time allocated for implementing each feature.
     • Resource Plan: A general review of the resources required and a plan for their usage.
     • Task and Responsibility Assignment: A table that depicts how tasks and responsibilities were assigned to each member of the group.
  4. Project Analysis and Design: This section gives an overview of the design phase of the application.
     • Architectural Style and Justification: A justification of the architectural style used and the modules included in the architecture diagram.
     • Software Requirements Specification Document: Contains functional and non-functional requirements, resource requirements, hardware and software requirements, etc. Various UML diagrams (class diagram, use case diagram, state diagram, DFDs) are explained here.
     • Software Design Document: Contains the user interface design and the component diagram explaining the software design of the project.
  5. Project Implementation: This section gives an idea of how the application was developed and executed.
     • Approach / Main Algorithm: A description of the 'Pothole detection algorithm' and the 'Obstacle detection algorithm'.
     • Programming Languages Used for Implementation: A list of the programming languages used for various purposes.
     • Tools Used: A list of the tools and hardware components used during the implementation of the project.
  6. Integration and Testing: Once the application has been developed, it needs to be tested for errors and bugs. This section describes the testing approach followed during the testing process.
     • Testing Approach: The methodology used for testing each module.
     • Test Plan: An overview of the testing procedure carried out and the tasks involved.
     • Unit Test Cases: The outputs of the various modules when tested individually.
     • Integrated System Test Cases: The output of the entire system as a whole, with all modules functioning.
  7. Conclusion and Future Work: The possible future features and work that can be implemented on the existing system.
  8. References: References to all the websites, books, journals, and papers referred to during the analysis, planning, and implementation phases of the project.

Chapter 2: Literature Review

GPS location based path generation:

The paper discusses the use of the Haversine and bearing formulas to guide the robot once we get the set of latitude/longitude points to be covered along the path. The Haversine formula calculates geographic distance on the Earth: using two latitude-longitude pairs, we can easily calculate the spherical distance between the points. The Haversine formula is used to break the path into straight-line segments.

Bearing can be defined as the direction or angle between the Earth's north-south line (the meridian) and the line connecting the target and the reference point, while heading is the angle or direction in which you are currently navigating. The difference between the bearing and the current heading is the turn the robot must make to align with the path from point A to point B. Bearing is measured from the north direction.
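The two formulas can be sketched in a few lines of Python (the report's own implementation is in Android/Java; the function names and the Earth-radius constant here are illustrative, not taken from the project code):

```python
import math

R = 6371000.0  # assumed mean Earth radius in metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # haversin(d/R) = haversin(dphi) + cos(phi1)*cos(phi2)*haversin(dlmb)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return math.degrees(math.atan2(y, x)) % 360
```

The signed turn the robot needs is then `(bearing - heading + 540) % 360 - 180`, which maps the difference into the range (−180°, 180°].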

Obstacle Detection and Avoidance:

The paper discusses the use of the Canny edge detection algorithm for detecting obstacles/objects. The Canny edge algorithm, developed by J. F. Canny, is one of the most popular edge detection algorithms. It involves real-time image processing with stages such as noise reduction, calculation of the intensity gradient of the image, non-maximum suppression, and hysteresis thresholding.

The result of all these stages is subjected to side-fill, smooth-hull, and erode operations to get a target point toward which the robot should move while avoiding all the objects/obstacles in between. The use of ultrasonic sensors is also advised, since computer vision algorithms using the smartphone's camera will not be able to detect transparent obstacles.

24-hour GPS tracking in Android Operating System:

The paper discusses the need for real-time GPS tracking of an Android device, for reasons such as security and controlling the activities of the user within a certain area. Real-time tracking is made possible using an overlay class and a set of geo-points. GPS tracking will be helpful in this project in the case of a robot breakdown, where the user/controller will be notified of the current location of the robot via SMS.

The application mentioned in the paper is always running in the background once it is started. It is built on top of SMS, so that once the application is installed on the mobile, all SMS-related activities are performed by the application by default. The user is notified of the device's current location along with an associated timestamp.
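The breakdown-notification idea described above can be sketched as follows. This is a Python illustration only: the class name and both thresholds are our assumptions, and the real system would run on Android and send an SMS rather than return a flag.

```python
import math

MOVE_EPS_M = 2.0          # assumed minimum displacement counted as real movement
STALL_THRESHOLD_S = 60.0  # assumed stationary-time threshold before notifying

def haversine_m(lat1, lon1, lat2, lon2, R=6371000.0):
    """Great-circle distance in metres between two lat/lon fixes (degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(h))

class StallDetector:
    """Tracks GPS fixes and flags when the robot has been stationary too long."""

    def __init__(self):
        self.last_fix = None        # (lat, lon) of last position counted as movement
        self.last_move_time = None  # timestamp of that movement

    def update(self, lat, lon, t):
        """Feed one timestamped GPS fix; return True if a breakdown is suspected."""
        if self.last_fix is None or haversine_m(*self.last_fix, lat, lon) > MOVE_EPS_M:
            self.last_fix = (lat, lon)
            self.last_move_time = t
            return False
        return (t - self.last_move_time) > STALL_THRESHOLD_S
```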

Chapter 3: Project Management Plan

3.1 Feasibility Analysis

The proposed system consists of two smartphones, one L293D motor driver connected to an Arduino, and a chassis that acts as the base frame. The motors required for motion run at 60 RPM and need approximately 18 V of battery power, which, after amplification by the L293D, drives the motors. The Arduino and chassis are readily available in the market at affordable rates, which makes the system economically and technically feasible.

In today's world everyone is assumed to have a smartphone. Hence, from the user's point of view, the cost of the project is almost zero (assuming they have a smartphone). From the developer's point of view, the cost of the project is approximately 1000 rupees (not counting the cost of the smartphone placed on the robotic car). These factors make the project economically feasible.

The entire project is implemented on the Arduino Uno and Android. Both are open source, so plenty of updates and improvements can be provided, making the system optimal and much more feasible in terms of software.

Our system can easily be built with an Android phone and is economically feasible, so users can install the application required for controlling the robot. The system can be used for educational purposes, and it can also be scaled into a delivery robot, like that of Amazon, or even a rescue bot in case of natural disasters.

3.2 Lifecycle Model

First Iteration:

Analysis & Design: Various methods used in the existing systems are studied and the best option or method is chosen.

Implementation: Path Generation using markers is implemented in Android using Android Studio and Google APIs.

Testing: Path Generation is tested.

Second Iteration:

Analysis & Design: Study for improvement of Path Generation and communication between smartphone and Arduino is planned.

Implementation: Communication between the smartphone and the Arduino is implemented.

Testing: Communication between Arduino and Android device is tested.

Third Iteration:

Analysis & Design: Study of obstacle detection using the Canny edge algorithm, involving computer vision. The addition of sensors as a modification is also studied.

Implementation: Implementation of Obstacle Detection.

Testing: Obstacle detection implementation is tested.

Fourth Iteration:

Analysis & Design: Close control of the robot using Android smartphone is planned.

Implementation: Implementation of close control using Android Studio and Arduino SDK with the usage of buttons.

Testing: Testing whether close control implementation was successful or not.

Fifth Iteration:

Analysis & Design: Plan GPS Navigation and Obstacle Avoidance.

Implementation: Implementation of GPS Navigation and Obstacle Avoidance is done using OpenCV libraries and Arduino SDK.

Testing: Testing GPS Navigation and Obstacle Avoidance.

3.3 Project Cost and Time Estimation

Particulars | Cost per piece (Rs.) | Quantity | Cost (Rs.)
Chassis for Bot | 75 | 1 | 75
Arduino Kit | 400 | 1 | 400
Wheels | 20 | 4 | 80
DC Motors | 65 | 2 | 130
Total Cost | | | 685

The following resources are required and used: Human Resources.

The following table shows the time required to complete the project:

Task Name | Start Date | End Date | Status
Analysis of existing systems | 03/07/16 | 11/07/16 | Completed
Determination of resources | 12/07/16 | 19/07/16 | Completed
Design of the project | 20/07/16 | 24/07/16 | Completed
Implementation of path generation | 25/07/16 | 31/07/16 | Completed
Implementation of communication between Arduino and Android smartphone | 05/08/16 | 18/09/16 | Completed
Implementation of phone mirroring | 19/09/16 | 20/10/16 | Completed
Implementation of Canny edge detection; addition of sensors studied | 20/10/16 | 28/11/16 | Completed
Implementation of close control | 04/12/16 | 02/01/17 | Completed
Testing of close control | 05/01/17 | 09/01/17 | Completed
Implementation of GPS navigation | 15/01/17 | 24/02/17 | Partially completed
Implementation of obstacle avoidance | 20/01/17 | 25/02/17 | Partially completed
Integration | 26/02/17 | 02/03/17 | Partially completed
Testing | 05/03/17 | 15/03/17 | Completed

Chapter 4: Project Analysis and Design

4.1 Software Architecture Diagram

4.2 Architectural style and justification

The architecture follows a layered style. Three layers are used: the Android device with the user, the Android smartphone on the robotic car, and the Arduino. There are various functions in Android; based on the function selected, the appropriate procedure call is made.

The first layer is the Android-based smartphone in the user's hand, which is used to feed the destination location and send the coordinates to the smartphone residing on the robotic car.

The second layer is the Android-based smartphone installed on the robot. This is the most important layer, as it communicates with the Arduino board to perform motion. It also applies the Haversine and bearing formulas, breaking the path into multiple straight-line segments and then following the direction of the destination. Obstacle detection using the Canny edge algorithm is also performed by this layer.

The third layer consists of the Arduino, which collects the signals from the smartphone on the robot to perform movements. These signals are sent to the L293D motor driver, which amplifies the current accordingly and rotates the motors.

4.3 Software Requirements Specification Document


4.3.1 Product Overview:

The proposed product/system is a robot: a smartphone placed on a robotic car communicates with the Arduino board, which in turn assists the L293D in controlling the rotation of the motors and thus navigating. The destination coordinates are sent by the user from the smartphone in his hand to the smartphone on the robot using phone mirroring.

The user's smartphone will have a phone-mirroring application installed, with which the user can mirror the screen of the Android device installed on the robot, feed in the destination coordinates, and view the camera feed.

These destination coordinates will be processed by the smartphone and a path will be generated. To navigate to the destination, commands will be given by the smartphone to the Arduino, which will then instruct the motor driver accordingly. The robot will also perform obstacle avoidance if there are any objects/obstacles in its path.

External Interface Requirements:

4.3.2 User Interface Requirements:

Since the proposed project involves two Android smartphones, there are two user interfaces: one for the user application and the other for the smartphone installed on the robot. The user application is a phone-mirroring application and just needs to connect to the smartphone installed on the robot.

The smartphone on the robot runs the "OnRobot" application, which is mirrored to the user and through which the user can submit the GPS coordinates. The user can also closely control the robot with the arrow control buttons provided in the UI. The UI also has an option for path generation using Google Maps, which provides a sophisticated visual representation.

4.3.3 Hardware Interface requirements

The hardware requirements for the project include the following:

  • Chassis and Wheels: Acts as the base for the robot. Four wheels are attached to the chassis; two of them are attached to DC motors.
  • Arduino Uno: The Arduino Uno microcontroller board acts as the brain of the robot. The Arduino communicates with, and acts as a bridge between, the Android device and the motor driver.
  • L293D Motor Driver: The L293D motor driver is responsible for controlling the DC motor movement and is controlled by the Arduino's signals. The commands for movements are fed into the Arduino board via the smartphone, which then performs the movement in coordination with the motor driver.
  • Li-ion Batteries: Required as a power supply for the L293D and the DC motors. The total power required is 15 V. For the Arduino, however, the Android device acts as the power source.

Software Interface Requirements:

4.3.4 Software Product Features

The software is divided into various parts which are as follows:

The user/controller can utilize this application for surveillance purposes, for example in the case of search bots used during calamities like earthquakes. Hence, client-server based mirroring is implemented in our project. The user can mirror the screen of the Android device installed on the robot, feed in the destination coordinates, and view the camera feed.

The Haversine and bearing formulas are used to guide the robot once we get the set of latitude/longitude points to be covered along the path.

Haversine Formula: The Haversine formula calculates geographic distance on the Earth. Using two latitude-longitude pairs, we can easily calculate the spherical distance between the points. The Haversine formula is used to break the path into straight-line segments.

Bearing Formula: Bearing can be defined as the direction or angle between the Earth's north-south line (the meridian) and the line connecting the target and the reference point, while heading is the angle or direction in which the robot is currently navigating. The difference between the bearing and the current heading is the turn needed to align the robot with the path from point A to point B. Bearing is measured from the north direction.

The Canny edge algorithm, developed by J. F. Canny, is one of the most popular edge detection algorithms. It involves real-time image processing with stages such as noise reduction, calculation of the intensity gradient of the image, non-maximum suppression, and hysteresis thresholding. The result of these stages is subjected to side-fill, smooth-hull, and erode operations to get a target point toward which the robot should move while avoiding all the objects/obstacles in between. The use of ultrasonic sensors is also advised, since computer vision algorithms using the smartphone's camera will not be able to detect transparent obstacles. Canny edge detection is implemented in Android using the OpenCV (Open Source Computer Vision) library, which combines all of the above stages in a single Canny() function.
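The intensity-gradient stage of the pipeline can be illustrated with a small pure-Python Sobel computation. This is a toy sketch only: the project itself would call OpenCV's built-in Canny function, and the image and kernel handling here are simplified for illustration.

```python
# 3x3 Sobel kernels approximating the horizontal and vertical derivatives.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_magnitude(img):
    """Per-pixel gradient magnitude for the interior of a grayscale image
    given as a list of lists; borders are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: left half dark (0), right half bright (255).
image = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mag = gradient_magnitude(image)  # large values only at the dark/bright boundary
```

Canny's later stages (non-maximum suppression and hysteresis) then thin these high-magnitude responses to one-pixel-wide edges.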

Software System Attributes

4.3.5 Reliability

The system is reliable, as communication between the smartphones is through phone mirroring, which implies that only devices that have the IP address of the smartphone on the robot can connect to it.

4.3.6 Availability

The software is available as required and provides correct information to the user. No database is required.

4.3.7 Portability

The proposed system is portable, as it is a small robotic car built on a chassis approximately 25 centimeters in length. Any required updates to the hardware or software can be made with ease.

4.3.8 Performance

The overall performance has passed most test cases. Only one user can operate the robot at a time. There is no pressure from peak workload conditions provided the minimum RAM and memory requirements are met.

4.3.9 UML Diagrams:

II. Data Flow Diagram for Obstacle Detection and Path Generation:

4.4 Software Design Document

4.4.1 UI Design

There are two Android-based applications used in this project: the phone-mirroring application, which is Wi-Fi based, and the OnRobot application, which communicates with the Arduino board.

  • Phone Mirror Application: The server side creates a Wi-Fi hotspot, while the user connects to the server by entering the server's IP address.
  • OnRobot Application: Runs on the Android device installed on the robot. Using the phone-mirroring application, the destination latitude and longitude are entered and submitted. For close control, arrow buttons are provided. The capture button is used for real-time surveillance, and the track button lets the user mark his destination using markers on the map.
  • GPS Path Generation Activity: By clicking on the map, markers appear indicating the start and destination points, and the corresponding path is generated. A new destination marker can be set directly on the map.
  • Real-time GPS Tracking Activity: This activity keeps track of the real-time location of the robot along with a timestamp. The latitude, longitude, and timestamp are stored in a file. It checks whether the robot has been stationary for more than a threshold time; if so, the controller is notified of the robot's location by SMS.

10 Secret Trig Functions Your Math Teachers Never Taught You

On Monday, the Onion reported that the "Nation's math teachers introduce 27 new trig functions." It's a funny read. The gamsin, negtan, and cosvnx from the Onion article are fictional, but the piece has a kernel of truth: there are 10 secret trig functions you've never heard of, and they have delightful names like "haversine" and "exsecant."

A diagram with a unit circle and more trig functions than you can shake a stick at. (It's well known that you can shake a stick at a maximum of 8 trig functions.) The familiar sine, cosine, and tangent are in red, blue, and, well, tan, respectively. The versine is in green next to the cosine, and the exsecant is in pink to the right of the versine. Excosecant and coversine are also in the image. Not pictured: vercosine, covercosine, and haver-anything. Image: Limaner and Steven G. Johnson, via Wikimedia Commons.

Whether you want to torture students with them or drop them into conversation to make yourself sound erudite and/or insufferable, here are the definitions of all the "lost trig functions" I found in my exhaustive research of original historical texts Wikipedia told me about.

Versine: versin(θ) = 1 − cos(θ)
Vercosine: vercosin(θ) = 1 + cos(θ)
Coversine: coversin(θ) = 1 − sin(θ)
Covercosine: covercosin(θ) = 1 + sin(θ)
Haversine: haversin(θ) = versin(θ)/2
Havercosine: havercosin(θ) = vercosin(θ)/2
Hacoversine: hacoversin(θ) = coversin(θ)/2
Hacovercosine: hacovercosin(θ) = covercosin(θ)/2
Exsecant: exsec(θ) = sec(θ) − 1
Excosecant: excsc(θ) = csc(θ) − 1
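All ten definitions transcribe directly into Python, one line each, mirroring the table above:

```python
import math

# The ten "lost" trig functions, straight from the definitions in the table.
versin       = lambda t: 1 - math.cos(t)
vercosin     = lambda t: 1 + math.cos(t)
coversin     = lambda t: 1 - math.sin(t)
covercosin   = lambda t: 1 + math.sin(t)
haversin     = lambda t: versin(t) / 2
havercosin   = lambda t: vercosin(t) / 2
hacoversin   = lambda t: coversin(t) / 2
hacovercosin = lambda t: covercosin(t) / 2
exsec        = lambda t: 1 / math.cos(t) - 1   # sec(t) - 1
excsc        = lambda t: 1 / math.sin(t) - 1   # csc(t) - 1
```

As a quick sanity check, haversin(θ) numerically equals sin²(θ/2), and coversin(θ) equals versin(π/2 − θ), exactly as discussed later in the post.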

I must admit I was a bit disappointed when I looked these up. They're all just simple combinations of dear old sine and cosine. Why did they even get names?! From a time and place where I can sit on my couch and find the sine of any angle correct to 100 decimal places nearly instantaneously using an online calculator, the versine is unnecessary. But these seemingly superfluous functions filled needs in a pre-calculator world.

Numberphile recently posted a video about log tables, which explains how people used logarithms to multiply big numbers in the dark pre-calculator days. First, a refresher on logarithms. The equation log_b(x) = y means that b^y = x. For example, 10^2 = 100, so log_10(100) = 2. One handy fact about logarithms is that log_b(c×d) = log_b(c) + log_b(d). In other words, logarithms make multiplication into addition. If you wanted to multiply two numbers together using a log table, you would look up the logarithm of both numbers and then add the logarithms together. Then you'd use your log table to find out which number had that logarithm, and that was your answer. It sounds cumbersome now, but doing multiplication by hand requires a lot more operations than addition does. When each operation takes a nontrivial amount of time (and is prone to a nontrivial amount of error), a procedure that lets you convert multiplication into addition is a real time-saver, and it can help increase accuracy.
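The log-table trick in miniature, with Python's math library standing in for the printed table:

```python
import math

# Multiplying via logarithms, as with a log table:
# log10(c*d) = log10(c) + log10(d), so look up the two logs, add them,
# then invert (10**s) to recover the product.
c, d = 123.0, 456.0
s = math.log10(c) + math.log10(d)  # the "addition" step done by hand
product = 10 ** s                  # the reverse table lookup
```

Here `product` comes out as 56088 (= 123 × 456) up to floating-point rounding; with a printed table, the accuracy would instead be limited by the number of digits tabulated.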

The secret trig functions, like logarithms, made computations easier. Versine and haversine were used the most often. Near the angle θ = 0, cos(θ) is very close to 1. If you were doing a computation that had 1 − cos(θ) in it, your computation might be ruined if your cosine table didn't have enough significant figures. To illustrate, the cosine of 5 degrees is 0.996194698, and the cosine of 1 degree is 0.999847695. The difference cos(1°) − cos(5°) is 0.003652997. If you had three significant figures in your cosine table, you would only get 1 significant figure of precision in your answer, due to the leading zeroes in the difference. And a table with only three significant figures of precision would not be able to distinguish between 0 degree and 1 degree angles. In many cases, this wouldn't matter, but it could be a problem if the errors built up over the course of a computation.

The bonus trig functions also have the advantage that they are never negative. Versine ranges between 0 and 2, so if you are using log tables to multiply with a versine, you don't have to worry about the fact that the logarithm is not defined for negative numbers. (It is not defined for 0 either, but that is an easy case to deal with.) Another advantage to the versine and haversine is that they can keep you from having to square something. A little bit of trigonometric wizardry (a.k.a. memorization of one of the endless list of trig formulas you learned in high school) shows that 1 − cos(θ) = 2 sin²(θ/2). So the haversine is just sin²(θ/2). Likewise, the havercosine is cos²(θ/2). If you have a computation involving the square of sine or cosine, you can use a haversine or havercosine table and not have to square or take square roots.
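Both points can be demonstrated in a few lines of Python, simulating a low-precision cosine table by rounding to 3 decimal places:

```python
import math

theta = math.radians(1.0)  # a small angle, where cos(theta) is very near 1

# Direct formula 1 - cos(theta) is ruined by a coarse table: the rounded
# "table" value of cos(1 degree) is exactly 1.0, so the difference vanishes.
cos_table = round(math.cos(theta), 3)   # low-precision table lookup -> 1.0
direct = 1 - cos_table                  # catastrophic cancellation: 0.0

# Full-precision reference value of 1 - cos(theta), about 1.523e-4.
exact = 1 - math.cos(theta)

# The identity 1 - cos(theta) = 2*sin(theta/2)**2 (i.e. 2*haversin(theta))
# avoids the dangerous subtraction entirely.
via_haversine = 2 * math.sin(theta / 2) ** 2
```

The direct route through the coarse table returns 0, while the haversine route reproduces the exact value: exactly the failure mode, and the fix, described above.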

A diagram showing the sine, cosine, and versine of an angle. Image: Qef and Steven G. Johnson, via Wikimedia Commons.

The versine is a fairly obvious trig function to define and seems to have been used as far back as 400 CE in India. But the haversine may have been more important in more recent history, when it was used in navigation. The haversine formula is a very accurate way of computing distances between two points on the surface of a sphere using the latitude and longitude of the two points. The haversine formula is a re-formulation of the spherical law of cosines, but the formulation in terms of haversines is more useful for small angles and distances. (On the other hand, the haversine formula does not do a very good job with angles that are close to 90 degrees, but the spherical law of cosines handles those well.) The haversine formula could yield accurate results without requiring the computationally expensive operations of squares and square roots. As recently as 1984, the amateur astronomy magazine Sky & Telescope was singing the praises of the haversine formula, which is not only useful for terrestrial navigation but also for celestial calculations. For more on the haversine formula and computing distances on a sphere, check out this archived copy of a census bureau page or this Ask Dr. Math article.

I don't have much information about the history of the other trig functions on the list. All of them could make computations more accurate near certain angles, but I don't know which ones were commonly used and which ones were named* analogously to other functions but rarely actually used. I'm curious about this, if anyone knows more about the subject.

When the Onion imitates real life, it's usually tragic. But in the case of secret trig functions, the kernel of truth in the Onion didn't make me sad. We're very lucky now that we can multiply, square, and take square roots so easily, and our calculators can store precise information about the sines, cosines, and tangents of angles, but before we could do that, we figured out a work-around in the form of a ridiculous number of trig functions. It's easy forget that the people who defined them were not sadistic math teachers who want people to memorize weird functions for no reason. These functions actually made computations quicker and less error-prone. Now that computers are so powerful, the haversine has gone the way of the floppy disc. But I think we can all agree that it should come back, if only for the "awesome" joke I came up with as I was falling asleep last night: Haversine? I don't even know 'er!

*I'd like to take a little digression to the world of mathematical prefixes here, but it might not be for everyone. You've been warned.

In the table of secret trig functions, "ha" clearly means half; the value of haversine is half the value of versine, for example. "Co" means taking the same function but with the complementary angle. (Complementary angles add up to 90 degrees. In a right triangle, the two non-right angles are complementary.) For instance, the cosine of an angle is also the sine of the complementary angle. Likewise, the coversine is the versine of the complementary angle, as you can see in light blue above one of the red sines in the diagram at the top of the post.

The one bonus trig function that confuses me a little bit is the vercosine. If the "co" in that definition meant the complementary angle, then vercosine would be the same as coversine, which it isn't. Instead, the vercosine is the versine of the supplementary angle (supplementary angles add up to 180 degrees), not the complementary one. In addition to the definitions as 1 − cos(θ) and 1 + cos(θ), the versine and vercosine can be defined as versin(θ) = 2 sin²(θ/2) and vercos(θ) = 2 cos²(θ/2). In the case of versine, I believe the definition involving cos(θ) is older than the definition involving sine squared. My guess is that vercosine was a later term, an analogy of the sine-squared definition of versine using cosine instead. If you're a trigonometry history buff and you have more information, please let me know! In any case, the table of super-secret bonus trig functions is a fun exercise in figuring out what prefixes mean.


We acknowledge support from the German Research Foundation (DFG) and the Open Access Publication Fund of Charité–Universitätsmedizin Berlin.

1. Powers WJ, Rabinstein AA, Ackerson T, Adeoye OM, Bambakidis NC, Becker K, et al. 2018 guidelines for the early management of patients with acute ischemic stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. (2018) 49:e46�. doi: 10.1161/STR.0000000000000158

2. Emberson J, Lees KR, Lyden P, Blackwell L, Albers G, Bluhmki E, et al. Effect of treatment delay, age, and stroke severity on the effects of intravenous thrombolysis with alteplase for acute ischaemic stroke: a meta-analysis of individual patient data from randomised trials. Lancet. (2014) 384:1929�. doi: 10.1016/S0140-6736(14)60584-5

3. Meretoja A, Keshtkaran M, Saver JL, Tatlisumak T, Parsons MW, Kaste M, et al. Stroke thrombolysis: save a minute, save a day. Stroke. (2014) 45:1053𠄸. doi: 10.1161/STROKEAHA.113.002910

4. Kunz A, Nolte CH, Erdur H, Fiebach JB, Geisler F, Rozanski M, et al. Effects of ultraearly intravenous thrombolysis on outcomes in ischemic stroke: the STEMO (Stroke Emergency Mobile) group. Circulation. (2017) 135:1765𠄷. doi: 10.1161/CIRCULATIONAHA.117.027693

5. Saver JL, Goyal M, van der Lugt A, Menon BK, Majoie CB, Dippel DW, et al. Time to treatment with endovascular thrombectomy and outcomes from ischemic stroke: a meta-analysis. JAMA. (2016) 316:1279�. doi: 10.1001/jama.2016.13647

6. Mulder M, Jansen IGH, Goldhoorn RB, Venema E, Chalos V, Compagne KCJ, et al. Time to endovascular treatment and outcome in acute ischemic stroke: MR clean registry results. Circulation. (2018) 138:232�. doi: 10.1161/CIRCULATIONAHA.117.032600

7. Meretoja A, Keshtkaran M, Tatlisumak T, Donnan GA, Churilov L. Endovascular therapy for ischemic stroke: save a minute-save a week. Neurology. (2017) 88:2123𠄷. doi: 10.1212/WNL.0000000000003981

8. Kim DH, Nah HW, Park HS, Choi JH, Kang MJ, Huh JT, et al. Impact of prehospital intervention on delay time to thrombolytic therapy in a stroke center with a systemized stroke code program. J Stroke Cerebrovasc Dis. (2016) 25:1665�. doi: 10.1016/j.jstrokecerebrovasdis.2016.02.011

Keywords: pre-hospital triage, decision analysis, ischemic stroke, thrombolysis, mechanical thrombectomy, health services research, geospatial modeling, mathematical modeling (medical)

Citation: Schlemm L, Schlemm E, Nolte CH and Endres M (2019) Pre-hospital Triage of Acute Ischemic Stroke Patients—Importance of Considering More Than Two Transport Options. Front. Neurol. 10:437. doi: 10.3389/fneur.2019.00437

Received: 24 January 2019; Accepted: 10 April 2019; Published: 26 April 2019

Reviewed by: Nawaf Yassi, The University of Melbourne, Australia; Hamed Asadi, Florey Institute of Neuroscience and Mental Health, Australia


Semi-Automated IoT Vehicles

Abstract—Developing countries face the problem of crowded and congested roads because of the inefficient implementation of traffic rules. Commuters often lack awareness of the rules, and drivers also tend to ignore them. This paper proposes an intelligent traffic system that automatically enforces traffic and road-safety rules on IoT-based cars. The proposed system first identifies all the road-safety rules applicable at the current location and then alerts the driver. If the driver does not respond by following a rule, the rule is imposed automatically. Rules are recognized by sensing the current location through GPS and searching the Virtual Traffic Management System for all predefined rules applicable in that region. Once the applicable rules are known, the driver is alerted; control signals are then sent to the control system to bring the vehicle's behavior in line with the traffic rules, thereby automating the vehicle's response.

Keywords—Intelligent transportation system, Web services, Speed limit, Road safety rules, IoT, Haversine algorithm, MySQL, Cloud service, Global Positioning System (GPS), Raspberry Pi.

Traffic and road safety rules have been developed to promote traffic safety and efficiency. Developing countries face the problem of crowded and congested roads because of the inefficient implementation of traffic rules. Commuters often lack awareness of the rules. In addition, drivers tend to ignore the rules, and it is particularly challenging for new drivers operating on unfamiliar roads to follow them. This is one of the main reasons for accidents. Hence, it is necessary to develop a system that informs the driver about the traffic and road-safety rules while steering, and that automates the vehicle's response when the driver fails to follow them. The major concern of a traffic-sense system is to ensure road safety and avoid accidents caused by driver negligence, which result in loss of life. The proposed intelligent traffic system is thus advantageous because of the unique features and capabilities of the Internet of Things (IoT).

The problem this project intends to solve is to overcome the hassles caused by either the driver's lack of awareness of the road-safety rules or their deliberate negligence toward them. The problem statement is to propose an intelligent traffic system that first identifies the traffic rules ahead of the vehicle's current location, alerts the driver about them, and, if the driver fails to follow the rules, automates the vehicle's response.

Jessen Joseph Leo uses GPS technology to find the location of the vehicle with respect to a hairpin bend and decide the priority in which vehicles should move in [1], but this system makes no effort to solve the problems of traffic congestion in other areas. A model by Benoit Vanholme provides real-time advisory alerts to the driver when approaching mapped points of interest (POIs) such as speed-limited zones, intersections and speed bumps; the determination of drivers approaching POIs is done using the K-Nearest Neighbor algorithm in [2], but the K-Nearest Neighbor algorithm requires choosing a parameter K, which is difficult. The legal safety rules for the autonomous vehicle, such as distance keeping, intelligent speed adaptation, and lane changing, are implemented using cameras and radar in [3]. An Android-based application collects data from a vehicle such as latitude, longitude, speed, and accelerometer data and sends it to its nearest fog server for processing; algorithms were developed that find the locations of road anomalies and accident-prone areas, and the result is shown on a Google map on the driver's smartphone in [4]. The concept of IoT- and cloud-based technology is used in car parking services in [5], but this only identifies whether a given zone is a parking zone or not, and no other rules are considered. In [6], RFID technology is used for guiding emergency vehicles along a reduced-congestion path. The IoT-based car by Saliq Afaque is remotely controlled through the internet by a web browser; the RPi camera captures live video and the RPi transmits this video for live processing over the web browser in [7], but only the forward movement of the car is handled, and dark vision (i.e., night time) using the RPi camera is difficult. In [8], automated vehicles communicate with other automated vehicles using the pure ALOHA system. In [9], the vehicle is embedded with GPS, accelerometer sensors, a crash sensor, a vehicle chip and a SIM module for communication; the information collected using these sensors is uploaded to the cloud at definite intervals. In [10], the system uses a GMM/GMR acceleration model in the curve-entry and curve-exit sections and a velocity-tracking algorithm in the mid-curve; the GMM/GMR method considers local automatic driving behavior based on time-series information. Eye-blink sensors and over-speed controller sensors are used for the speed-control mechanism in [11], but detection is not early and blink sensors are not very precise, which can lead to accidents. In [12], the system gives priorities to vehicles at curves and controls the speed based on the type and amount of bend to be taken; the location of the vehicle is matched against maps to find whether the vehicle is nearing a bend. In [13], a mobile application connected to the cloud is developed for the user to know about the availability of parking spaces, but there is ambiguity in the cloud computing when many users use the application at a time. The system by Jianming Xie, which analyses the driver's curve-driving data and considers the road construction, can be used for road-condition management [14]. In [15], a smart car is built with a DC motor for driving and a steering motor for turning control, so that it can run along a black line at high speed; the control strategy for smart-car turning is based on road vision: through the image of the road, the smart car tracks along the road, and the running speed is adjusted according to the road shape. Here, the control strategy based on road shape depends mainly on image processing, which is not always efficient. In [16] and [17], the system enables the user to reserve a parking slot from a remote place with the help of a mobile application; authentication of the valid booking is incorporated to benefit valid users. This system is implemented using low-cost IR sensors, a Raspberry Pi model 3B for real-time data collection, and an E-Parking mobile application.

The main objectives of this project are to develop a real-time driver-assistance system that assists drivers by informing them about the rules ahead and, in case of no response from the driver, actuates the response itself. The system performs the following functions:

Control the speed of the vehicle in accordance with the speed limit using a DC geared motor in speed-limit zones.

Notify the driver in case of speed breaker ahead.

Disable the horn system of the vehicle in no-horn areas.

Notify the driver in case of a no-parking area.

Notify the driver about other signboard rules ahead, such as school zones, one-way streets, sharp left/right curves, etc.

The proposed model centers on automating the vehicle according to the traffic rules and regulations as the vehicle approaches a point where rules are defined. This section describes how the flow of processes in the model is designed, starting from user inputs, through the computations in the local device, followed by storage in the database on the cloud-hosted server, and ending with the system's response back to the user. The model comprises two modules: the Policy Maker's site and the Automobilist's site.

This is the interface for the policy makers (rule makers), who will be part of the road safety department, to formulate and deploy the set of rules corresponding to geographical locations. A GUI is designed for the road safety department so that policy makers can input and update road safety rules. Each road rule is recorded in terms of its geographic location, i.e., its latitude and longitude, the type of the rule, and the distance over which the rule is to be enforced from the initial coordinates. This data is used to create a database of road safety rules for a particular geographic location using MySQL. The MySQL database of road safety rules is uploaded to the server in the cloud, which forms the Virtual Traffic Management System. The policy maker is authenticated with login credentials to access the Virtual Traffic Management System and is provided with privileges for certain functions: adding new rules with new coordinates, defining new rules at existing coordinates, defining new coordinates for existing rules, and deleting existing rules.

The flowchart depicted in figure 1 shows the detailed working of the policy maker's side. The graphical user interface provides the authorised traffic management unit with web-page access to perform the above-mentioned privileged operations.

Figure 2 explains the flow of how the policy maker's and user's sites are built and how the rules are stored in the database. A process flow is a list of event steps that defines the interactions between a role and a system in order to achieve a goal; the actor can be a human or another external system.

Figure 1: Flowchart for policy makers site of the model

Use-case analysis is an important and valuable requirement-analysis technique that has been widely used in modern software engineering. The actor or admin (policy maker) logs in to the webpage, where he can enter the details of the rules into the cloud. These entered values are stored in the cloud and fetched whenever needed. The stored values are speed-limit values, parking-area details, hump-area details, and no-horn zones at a particular location. Only the authenticated policy maker has control over speed-limit locations, no-horn zones, parking zones, and the entry of location data into the database.

Figure 2: Use Case for the model

This module is present at the user end, providing users with the option to switch between the two modes of operation, i.e., the driving mode and the parking mode. A touch-screen display is provided so the vehicle driver can switch between the two modes. The flowchart depicted in figure 3 provides the detailed working of the automobilist's site.

When the driver switches to driving mode, the GPS module senses the live location of the vehicle, and the live location coordinates are sent to the Raspberry Pi dynamically. The Raspberry Pi is the processor for the model; it runs the cloud-access program that queries the Virtual Traffic Management System for all the predefined traffic rules applicable in that area.

Using the Haversine distance algorithm, an additional column is created in the table holding the distance between the vehicle's present coordinates and the coordinates of each rule in the Virtual Traffic Management System. The implementation of the Haversine distance algorithm is explained further in this chapter. The nearest five rules are retrieved from the cloud and processed. Once the nearest rule is known, the driver is alerted about it through speech and/or text on the touch-screen display. In addition, control signals are sent to the Raspberry Pi, which in turn signals the interface drives to enforce the nearest rule on the vehicle in line with the traffic rules.
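A simplified sketch of this retrieval step in Python (the prototype itself runs Java on the Raspberry Pi, and rules here are plain (lat, lon, rule) tuples rather than database rows; all names are hypothetical):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def nearest_rules(vehicle_lat, vehicle_lon, rules, n=5):
    """Attach a distance to every (lat, lon, rule) entry and keep the n nearest."""
    with_distance = [
        (haversine_km(vehicle_lat, vehicle_lon, lat, lon), lat, lon, rule)
        for lat, lon, rule in rules
    ]
    return sorted(with_distance)[:n]
```

The distance column corresponds to the first element of each returned tuple; sorting on it yields the nearest rules first.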

Figure 3: Flowchart for automobilists site of the model

When the driver wishes to park the vehicle, the driver switches to parking mode using the touch-screen display provided. The live location coordinates of the vehicle sensed by the GPS module are compared by the Raspberry Pi with the parking-zone coordinates predefined in the Virtual Traffic Management System to indicate whether the car is in a parking zone or not.

The haversine formula is used to find the great-circle distance (arc length) between two points of interest on a sphere whose longitudes and latitudes are known. Let θ be the central angle between any two points of interest on the sphere. It is given by

θ = d / r

where d is the distance between the two points along a great circle of the sphere, and r is the radius of the sphere, i.e., the Earth. The haversine formula is given by:

hav(θ) = hav(φ2 − φ1) + cos(φ1) cos(φ2) hav(λ2 − λ1)

where φ1, φ2 are the latitudes of point 1 and point 2 respectively, and λ1, λ2 are the longitudes of point 1 and point 2 respectively. The haversine function of an angle θ is given by:

hav(θ) = sin^2(θ/2) = (1 − cos θ) / 2

To get the distance d, the arc haversine (inverse haversine) is applied to the central angle θ:

d = 2r · arcsin( √( hav(φ2 − φ1) + cos(φ1) cos(φ2) hav(λ2 − λ1) ) )

Thus, the required value of d is obtained using the haversine formula.
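As an illustration, the formula above can be implemented directly — a minimal Python sketch (the paper's prototype implements this in Java, so the function name and radius default are assumptions):

```python
import math

def haversine(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two (lat, lon) points given in degrees.

    r is the sphere's radius; 6371.0 km is a commonly used mean Earth radius.
    """
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # hav(theta) = sin^2(theta / 2), so the bracketed term below is
    # hav(phi2 - phi1) + cos(phi1) cos(phi2) hav(lambda2 - lambda1)
    h = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    # d = 2 r arcsin(sqrt(h))
    return 2 * r * math.asin(math.sqrt(h))
```

For example, two points on the equator 90° of longitude apart are a quarter of the Earth's circumference apart, i.e., roughly 10 008 km.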

A data flow diagram (DFD), i.e., a graphical representation of the flow of data through the processes modelling the policy maker's site, is depicted in figure 4.

Figure 4: Data Flow Diagram for the model

Figure 5: Sequence Diagram for the model

The sequence diagram is depicted in figure 5; it shows the sequence of tasks that occur, from login and registration through adding, modifying, or deleting the rules.

This section discusses how the designed model and process flow are actually implemented. It shows the actual GUI designed on both the policy maker's and the automobilist's side (displayed on the touch screen) and how the software and hardware parts are integrated. The implementation also has two parts:

This is the interface through which the policy makers, i.e., the road safety department, formulate and deploy the set of rules corresponding to geographical locations.

A web application is designed for the road safety department so that policy makers can input and update road safety rules. The web application is built using the ASP.NET framework in the Visual Studio tool.

Figure 6 shows the index page of the application. The index page is loaded with the necessary static content, such as the types of signs and information about the project, as shown in figure 7.

Figure 6: Welcome Page of Policy Maker Site

Figure 7: About the project page

Figure 8: Login Page of Policy Maker Site to allow only authentic users to change the policies

On hitting the Login button on the index page, the login page pops up as shown in figure 8. By entering valid login credentials in the appropriate text boxes and hitting the Submit button, policy makers are authenticated and directed to the Admin page. The Admin master page has the following pages:

Figure 9 shows the Latitude & Longitude page used to add rules at given coordinates. The latitude and longitude coordinates are entered, and the type of rule at that point is chosen from the drop-down menu under Category. The value for that rule (say, 40 km/h for a speed limit) is specified accordingly. The Submit button inserts the entered rule details into the database table.

Figure 9: GUI at Policy Maker site to add rules at any co-ordinates

Figure 10: Detail of rules at any co-ordinates

Figure 10 shows the View Lat&Lon page, designed to display the database table contents with Edit and Delete options for each entry. The Edit button redirects to the Latitude & Longitude page, where changes can be made. The Update button updates the database table for the corresponding entry.

In this module, the hardware represents the vehicle whose response in following the traffic rules is being controlled. The prototype of the proposed model consists of a Raspberry Pi, which acts as the controller, a GPS module to obtain the live location of the vehicle, and a touch-screen display, which mimics the infotainment display available in real vehicles, as shown in figures 11 and 12.

A user interface is provided to the vehicle driver over a touch-screen display to switch between the driving mode and the parking mode; it also displays the live location coordinates, the rule currently applied to the vehicle, the live location address, and the total distance covered since the model started. The GUI also contains text boxes in which the live location, i.e., the latitude and longitude values, is provided. The touch-screen display used in the prototype is a Raspberry Pi-compatible 7-inch HDMI touch screen.

A Raspberry Pi model 3 is used in the prototype; it controls the response of the model, processes the rules fetched from the database, and processes the latitude and longitude obtained from the GPS module.

Java code runs continuously on the Raspberry Pi and consists of the GUI design script, the Raspberry Pi GPIO pin declarations, web services (to access the database), and functions to trigger control signals to the output drives.

A constructor is executed as soon as the object of the class is created. Within the constructor, the text boxes in the GUI are filled with the live location received from the GPS module, and the horn switch and the stop button are enabled.

Two threads run in the background to continuously update the location and to display the rules in the GUI. These threads are enabled once the Start button on the GUI is pressed.

When the Start button is pressed, the motor (representing the vehicle's wheel) starts to rotate at an initially specified speed (40 km/h in the prototype), and the first thread, which updates the location, is created and started. It executes a function with an infinite loop that gets the vehicle's GPS location, displays it in the GUI, and reverse-geocodes the coordinates to obtain the current address, which is also displayed. It also retrieves the rules for this location using the web services; all rules within a distance of 10 km (in the prototype) of the current location are retrieved.

Figure 11: Complete prototype setup

Figure 12: Components of prototype setup

All the rules obtained are stored in an array list. The nearest among them is found using the Haversine formula, and the rule that is at a distance of less than 20 meters is applied to the vehicle. The other thread displays the rule currently enacted on the vehicle in the GUI. The rule to be applied is processed to determine whether it is a speed limit, no-horn, or no-parking zone.

If the rule category is speed limit, the speed to which the vehicle must change is scaled down by a factor of 10; the scaled values range from 1 to 10. A corresponding PWM signal is sent to the motor driver, which drives the motor at the specified speed; the change in speed is gradual. A message such as "There is a speed limit of 40 km/hr in this area" is displayed in the GUI.
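The scaling rule described above can be sketched as a small helper (a Python illustration with a hypothetical name; the prototype itself uses the resulting value to set the PWM duty on the motor driver):

```python
def speed_to_pwm_scale(speed_limit_kmph):
    """Scale a speed limit down by a factor of 10 to a value in 1..10.

    A 40 km/h limit maps to 4, which the controller would use when
    setting the PWM signal for the motor driver. Out-of-range values
    are clamped to the [1, 10] band.
    """
    scaled = round(speed_limit_kmph / 10)
    return max(1, min(10, scaled))
```

For example, a 40 km/h zone yields scale 4, while any limit above 100 km/h saturates at 10.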

If the rule category is parking, the message "You are currently in the parking zone" is displayed in the GUI, and the system checks whether the switch provided to change to parking mode is enabled. If it is enabled and the rule category is parking, the green LED is turned on, indicating that the vehicle is in the parking zone, and vice versa.

If the rule category is no horn, the buzzer (representing the vehicle's horn) is disabled, i.e., even when the horn button in the GUI is clicked, the buzzer does not sound. The message "This is strictly a No Horn zone" is displayed in the GUI. The hardware prototype also contains a power supply for the R-Pi and the motor driver.

The proposed model was designed, implemented, and tested for all the test cases. A miniature vehicle was developed to perform the analysis. A set of rules was deployed in the database for different locations and tested for the rules defined above, namely speed limit, no horn, and parking.

The results obtained were as expected. Rule access from the server and the processing time for a rule change together took approximately 5 seconds with a strong internet connection.

In regions where no speed limit is defined, the speed of the motor (vehicle) changes in accordance with the input from the driver; the motor ran at full speed in these regions when the user increased the speed, and no rules were shown in the display since those regions had no rules defined. In regions where a speed limit is defined, the speed of the motor only increased or decreased up to the specific rpm corresponding to that speed; despite the user trying to increase the speed beyond the given limit, the speed of the motor did not increase. The speed control in regions with a defined speed limit was thus verified.

In zones not defined as parking zones, if the driver tries to park (i.e., switch from driving mode to parking mode), the green LED does not glow, indicating that he cannot park in that zone. In zones defined as parking zones, the user can successfully switch from driving mode to parking mode, and the LED glows, indicating that the automobilist can park in that area.

Figure 13: Screenshot of the touchscreen display

During testing, the GUI on the touch screen showed the latitude and longitude of the moving vehicle's current location, the distance covered by the vehicle, the address of that location, and the rule fetched from the cloud and executed, as depicted in figure 13.

This system presents real-time road-safety monitoring to solve the problems caused by rule breaking. The proposed system provides a new way to enforce traffic rules and is a step towards autonomous vehicles that follow the traffic rules. The traffic administration department (the government's traffic rules department) can use this real-time road-safety monitoring system to detect dangerous situations on the road and react by imposing immediate actions. The data stored in the database in the cloud are all verified and validated. Automating the traffic rules on a vehicle prevents the driver from violating them. The system also helps limit the speed of the vehicle, thereby reducing the risk of accidents.

On the whole, IoT together with our system, Traffic Sense, will play an important role in real-time road-safety monitoring. By including further features such as traffic signals and inter-vehicle communication, a more advanced vehicle can be designed.

Vehicle movement control and accident avoidance in hilly track, by Jessen Joseph Leo, R.Monisha, B.T.Tharani Sri Sakthi and A. John Clement Sunder, International Conference on Electronics and Communication System JCECS -2014

Highly Automated Driving on Highways Based on Legal Safety, by Benoit Vanholme, Dominique Gruyer, Benoit Lusetti, Sébastien Glaser, and Saïd Mammar, IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 1, March 2013

Cooperative Driving of Automated Vehicles with Inter-vehicle Communications, by Takeshi Sakaguchi, Atsuya Uno, Shin Kato and Sadayuki Tsugawa, IEEE Intelligent Vehicles Symposium 2000, Dearborn (MI), USA, October 3-5, 2000

Vehicle Control Algorithms for Cooperative Driving With Automated Vehicles and Intervehicle Communications by Shin Kato, Sadayuki Tsugawa, Member, IEEE, Kiyohito Tokuda, Member, IEEE, Takeshi Matsui, and Haruki Fujii, IEEE transactions on intelligent transportation systems, vol. 3, no. 3, September 2002

Real-time Driver Advisory Model: Intelligent Transportation Systems, by James Obuhuma, Henry Okoyo and Sylvester Mcoyowo, IIMC International Information Management Corporation, 2018. ISBN: 978-1-905824-60-1

Smart Vehicles with Everything, by R.Srinivasan, A.Sharmili, Dr.S.Saravanan and D.Jayaprakas, 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I)

IoT-Driven Road Safety System, by Dasari Vishal, H. Saliq Afaque, Harsh Bhardawaj and T. K. Ramesh, Department of Electronics and Communication Engineering, Amrita University, Bengaluru, at the 2017 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT)

Design of an IoT based autonomous vehicle with the aid of Computer Vision, by Mohammad Rubaiyat Tanvir Hossaim, Md. Asif Shahjalal and Nowroz Farhan Nur, International Conference on Electrical, Computer and Communication Engineering (ECCE), February 16-18, 2017, Cox's Bazar, Bangladesh

Safe driving using IoT sensor system, by A. Jesudoss, B.O. Muthuram and A. Lourdson Emmanuel, School of Computing, Sathyabama Institute of Science and Technology, International Journal of Pure and Applied Mathematics, vol. 118, no. 20, 2018, 3745-3750.

An Approach to IoT based Car Parking and Reservation system on Cloud by Vaibhav Hans, Parminder Singh Sethi and Jatin Kinra, Centre of information Technology University of Petroleum and Energy Studies Dehradun, India at 2015 International Conference on Green Computing and Internet of Things (ICGCIoT)

Titus, Joby; Sanjana, V. S.; Saranya, B.; Manikandan, Pravin (2017). Automatic Road Surveillance to Implement Vehicle Traffic Rules. International Journal of Scientific Research. 4. 87-90.

Embedded controller for vehicle In-Front obstacle detection and cabin safety alert system, by V. Ramya, B. Palaniappan and K. Karthick, International Journal of Computer Science and Information Technology, vol. 4, no. 2, April 2012.

IoT based smart and safe driving applications, by Dr. Thyagaraju G S, HMT Gadiyar and U B Sujit.

A Personalized Curve Driving Model for Intelligent Vehicle, by Jianming Xie, Jianwei Gong ,Shaobin Wu, Guangming Xiong, and Chao Lu at IEEE conference 2017

Control Strategy Design for Smart Car Auto-tracing with Visual by Hui Zhang and Yongxin Liu , College of Electronic Information Engineering Inner Mongolia University, Hohhot, 010021, China at 2014 26th Chinese Control and Decision Conference (CCDC)

IoT Based Sensor Enabled Smart Car Parking for Advanced Driver Assistance System by Mahendra B M, Dr Savita Sonoli and Nagaraj Bhat, RV College of Engineering.

Monitoring and Reporting of Traffic Rules Violation Using Microcontroller Through Wireless Communication System, by Hossain, Mohammad; Bhuiya, Mu; Ahamed, Jamal Uddin; Bhuiyan, Tanvir; Bhowmik, Suman (2018).


Toward Automated Celestial Navigation with Deep Learning

This is our project from the deep learning in the cloud and at the edge course that we took as part of the UC Berkeley School of Information's Master of Information and Data Science program. We examined the feasibility of automated, deep-learning-enabled celestial navigation by training a hybrid deep and convolutional neural network on synthetic images of the sky coupled with a representation of the time at which the images were taken. We implemented the haversine formula as a custom loss function for the model. We generated, pre-processed, and trained on synthetic images in the cloud and conducted inferencing on a Jetson TX2.
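The idea of a haversine loss can be sketched as follows — a NumPy illustration only (the project implements it as a custom TensorFlow/Keras loss; the function name here is hypothetical):

```python
import numpy as np

def haversine_loss(y_true, y_pred, r=6371.0):
    """Mean great-circle distance (km) between predicted and true positions.

    y_true and y_pred are arrays of shape (batch, 2) holding (lat, lon)
    in degrees. Used as a regression loss: a smaller mean distance
    between predicted and actual fixes means a better model.
    """
    lat1, lon1 = np.radians(y_true[:, 0]), np.radians(y_true[:, 1])
    lat2, lon2 = np.radians(y_pred[:, 0]), np.radians(y_pred[:, 1])
    # hav(theta) = sin^2(theta / 2)
    h = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return float(np.mean(2 * r * np.arcsin(np.sqrt(h))))
```

Because the loss is the physical positioning error in kilometres, its value during training is directly interpretable as the quality of the fix.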

Skills demonstrated: Deep regression for computer vision | Implementing custom loss functions | The TensorFlow functional API | Training in the cloud | Generating synthetic data | Inferencing at the edge | Containerizing components

Languages and frameworks: Python | TensorFlow | Keras | ktrain | Bash | Docker

Continue on to read the white paper

1.1 A Very Brief Overview of Marine Navigation

Navigation, along with seamanship and collision avoidance, is one of the mariner's fundamental skills. Celestial methods were once the cornerstone of maritime and aeronautical navigation, but they have been almost entirely supplanted by electronic means—first by terrestrially-based systems like OMEGA [1] and LORAN [2], then by satellite-based systems like GPS [3], and then by mixed systems like differential GPS (DGPS) [4]. Unfortunately, electronic navigation is fragile both to budgetary pressure and to malicious acts [5][6]. Building and maintaining proficiency in celestial navigation is difficult, however, and a skilled navigator is still only able to fix a vessel's position a few times daily, weather permitting [7].

In current practice, marine celestial navigation requires:

  • A clear view of the horizon. The navigator measures angles of elevation with a sextant using the horizon as a baseline.
  • A clear view of celestial bodies. Two bodies are required to fix the vessel's position, but three are typically used in practice. Repeated observations of the sun can be combined to provide what is referred to as a running fix of the vessel's position.
  • An accurate clock. Without a clock, a navigator can only compute the vessel's latitude.
  • A dead reckoning plot on which the navigator estimates the ship's track.
  • For manual computation, a number of volumes containing published tables.

An automated hybrid inertial-celestial navigation system for aircraft, referred to as astroinertial navigation, exists aboard high-end military aircraft and missile systems. These systems rely on catalogs of stars and an estimated position developed through integration of accelerometer data [8].

1.2 Intent of this Project

We want to explore the possibility of applying deep learning to the task of automating celestial navigation. We entered this project wanting to answer the following questions:

  • Can we demonstrate the possibility of a viable solution to marine celestial navigation by casting it as a deep regression problem?
  • What is the minimum amount of information that we need to provide such a system? In particular, can we build a system that suggests—if it does not achieve—a viable solution that does not take an estimated position, as is computed in an astroinertial system or as is estimated by mariners through dead reckoning, as input?

We envision mariners using our system as a backup to other forms of navigation, primarily for open-ocean navigation more than five nautical miles from charted hazards. A navigator would depart on a voyage with a set of models suited to the vessel's track, much as a navigator today loads relevant charts from removable storage or the internet before departing. The system takes nothing more than the image returned from the edge device's installed camera and the time from an accurate clock as input. In its final implementation, it will return both raw position output and industry-standard NMEA output suitable for integration with an electronic charting package. A navigator with internet access at sea can load additional models from the cloud as needed to account for adjustments to the planned voyage.

We made a number of engineering assumptions to make the problem tractable as a term project.

  • We are simulating images that would be taken at sea with synthetic images. We hope eventually to conduct an operational test of the system. We assume, then, that the installed camera on the operational system can replicate the resolution and overall quality of our synthetic images.
  • We assume the notional vessel on which the system would be installed is fitted with a three-axis gyrocompass to allow images to be captured at a fixed azimuth and elevation. The requirements of the stabilization system can be inferred by perturbing the synthetic images to simulate stabilization error.

Our system consists of a cloud component and an edge component. An image generator creates batches of synthetic images, names them using a descriptive scheme that allows easy indexing by location and time, and stores the images in object storage buckets indexed by location and time. The model trainer pulls from these buckets to create models specific to a bounded geographic area within given time bounds. These models are stored in object storage. The edge device—in this case a Jetson TX2—captures an image of the sky and the time at which the image was taken. The inference engine performs a forward pass of the model, returning the vessel's predicted location as raw output.

Our model is a relatively simple CNN tuned to provide reasonably accurate predictions over a navigationally-relevant geographical area over a short but relevant time period. We do not claim that this architecture is an ideal candidate for a production version of this project. We merely want to demonstrate feasibility.

Our goal was to explore architectures that could learn a vessel's position from an arbitrary image of the sky with the azimuth and elevation fixed and the time known. We explored both DNNs and CNNs but settled on the latter because CNNs encode usable predictions more efficiently than deep dense networks.

There are a number of ways to attack this problem. Object detection could, for instance, be used to find the location of certain known celestial bodies in an image. We chose to cast our research as a deep regression problem. The literature suggests that convolutional neural networks have been applied to problems of head position detection [9]. This is a similar problem to ours in the sense that we are trying to find a mapping between translations and rotations of elements of the sky and the position of the observer at a given time. Our problem is different, however, in that the sky does not translate and rotate as a unitary whole. The path of the moon and planets, for instance, is not fixed relative to that of the stars. Stars do move together, however, on what is referred to simplistically as the celestial sphere [10].

Our network has two inputs: the image input ingests pictures of the sky, and the time input ingests the UTC time at which the image was taken. The images are run through a series of convolutional and max pooling layers. The results are concatenated with the normalized time, and the resulting vector is put through dropout regularization, a dense hidden layer, and the regression head. The head consists of two neurons, one each for normalized latitude and normalized longitude. Latitude and longitude are normalized over the test area, with the latitude of the southernmost bound mapping to 0 and the latitude of the northernmost bound mapping to 1. Longitude is mapped similarly. The output layer uses sigmoid activation to bound the output in the spatial domain on [0,1] × [0,1].
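The two-input topology described above can be sketched with the Keras functional API. The filter counts and the 128-unit hidden layer below are illustrative assumptions, not the tuned values from our experiments; only the input shapes and the two-neuron sigmoid head follow directly from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Two inputs: a single-channel sky image and a normalized capture time.
img_in = layers.Input(shape=(224, 224, 1), name="image")
t_in = layers.Input(shape=(1,), name="time")

# Convolution / max-pooling stack (layer sizes are illustrative).
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Concatenate image features with the normalized time, then regress.
x = layers.Concatenate()([x, t_in])
x = layers.Dropout(0.5)(x)
x = layers.Dense(128, activation="relu")(x)

# Two sigmoid neurons: normalized latitude and longitude on [0,1] x [0,1].
out = layers.Dense(2, activation="sigmoid", name="latlon")(x)

model = Model([img_in, t_in], out)
```

The sigmoid head guarantees outputs inside the normalized spatial domain regardless of the hidden-layer activations.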

3.2 Implementing a Haversine Loss Function

We want our loss function to minimize the navigational error returned by the model. We initially used mean squared error across latitude and longitude as our loss function, but we found that convergence was slow and that the model frequently failed to converge to reasonable values. There is, of course, a nonlinear relationship between angular separation and distance: whereas a degree of latitude spans a consistent 60 nautical miles anywhere on the globe, a degree of longitude spans 60 nautical miles only at the equator and shrinks toward zero as the meridians converge at the poles. Mean squared error on raw coordinates therefore produces inconsistent results in terms of the model's error in distance. To correct this, we implemented a new loss function in TensorFlow based on the haversine formula, which gives the great-circle distance between two points defined by latitude and longitude [11].

The haversine formula is given below. The value of interest, d, is the distance between two points defined by latitude-longitude pairs [φ1, λ1] and [φ2, λ2]:

d = 2r · arcsin( √( sin²((φ2 − φ1)/2) + cos φ1 · cos φ2 · sin²((λ2 − λ1)/2) ) )

The value r is the radius of the Earth and sets the units. We chose our r to provide an output in nautical miles.

The haversine function is strictly positive and is continuously differentiable in all areas of interest. Minimizing the haversine loss minimizes the error between predicted and actual locations, and the negative gradient of the function gives the direction of steepest descent in terms of the predicted latitude and longitude.
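As a sketch, here is the haversine distance in NumPy. The actual loss is written with the equivalent TensorFlow ops (tf.sin, tf.cos, tf.asin, tf.sqrt) so that gradients flow through it; the radius constant below is an assumed mean Earth radius chosen to yield nautical miles.

```python
import numpy as np

R_NM = 3440.065  # assumed mean Earth radius in nautical miles

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two points
    given as latitudes/longitudes in degrees."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((p2 - p1) / 2) ** 2
         + np.cos(p1) * np.cos(p2) * np.sin((l2 - l1) / 2) ** 2)
    return 2 * R_NM * np.arcsin(np.sqrt(a))
```

One degree of longitude along the equator comes out to roughly 60 nautical miles, as expected, and identical points give a distance of exactly zero.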

We must rely on synthetic images to train the model because we cannot otherwise capture future configurations of the sky. We adapted the open source astronomy program Stellarium for use in the cloud as an image generator. In this section we describe our method for adapting Stellarium and explain how to run the image generator in the cloud with images stored in an S3 bucket.

Stellarium generates high-quality images of the sky for arbitrary geographic positions, times, azimuths (the heading of the virtual camera in true degrees), altitudes (the angular elevation of the virtual camera), camera fields of view, and observer elevations [12]. Aimed at practitioners and hobbyists, Stellarium has been in continual development since 2006. In addition to rendering celestial bodies, it can render satellites, meteor showers, and other astronomical events. The program uses a JavaScript-based scripting system to automate sequences of observations.

4.2 Containerizing Stellarium

Stellarium is designed for desktop operation. Our Docker container allows the program to be run on a headless server in the cloud. The basic container operation is outlined in the figure below. A Python script reads input from a YAML file and generates an SSC script that automates image generation. Stellarium runs with a customized configuration file that prevents certain modules from loading, then executes the SSC script. The image files are saved either to local storage on the host or to an S3 mount point that the container accesses on the host system. From there, the images can be preprocessed using the Docker container discussed in section 5.1 below.
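A minimal sketch of the script-generation step, assuming a flat set of parameters rather than the project's actual YAML schema. The core.* calls are standard Stellarium scripting functions, though the exact argument lists should be checked against the Stellarium User Guide [12].

```python
def make_ssc(lat, lon, date_utc, azimuth, altitude, out_prefix):
    """Emit a minimal Stellarium SSC script that sets the observer's
    position, sets the clock, points the virtual camera, and takes a
    screenshot. Parameter names here are illustrative assumptions."""
    return "\n".join([
        f'core.setObserverLocation({lon}, {lat}, 0, 0, "obs", "Earth");',
        f'core.setDate("{date_utc}", "utc");',
        f'core.moveToAltAzi({altitude}, {azimuth}, 0);',
        'core.wait(1);',                       # let the renderer settle
        f'core.screenshot("{out_prefix}", false);',
    ])
```

A generator loop would call this once per grid point in the spatial-temporal training grid and hand the resulting scripts to Stellarium in sequence.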

We built a Docker container to facilitate training in the cloud. The container is built on the base TensorFlow container [13] and facilitates deploying instances to allow simultaneous training of models representing different spatial-temporal regions. Our model is written in TensorFlow and requires two inputs—one image and one floating point value identifying the time at which the image was generated.

We provide a Docker container to handle the task of preprocessing images into Numpy arrays ready to be used in training. The container converts images into Numpy arrays with dimensions (None, 224, 224, 1) with a script that leans heavily on OpenCV [14]. Images are read from a single directory, reduced to the target resolution, and stacked in a single Numpy array. Time and true position are parsed from the input file names, which follow the format <lat>+<long>+<YYYY>-<MM>-<DD>T<hh>:<mm>:<ss>.png . Date-time groups are converted to numpy.datetime64 and normalized on [0,1] based on the temporal bounds of the training set. Latitudes and longitudes are normalized on [0,1] based on the maximum extent of the geographic area under consideration and formed into an array with dimensions (None, 2) .
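The parsing and normalization step can be sketched in plain Python. The bounds below are taken from the experimental space described later (36°N to 40°N, 074°W to 078°W, 22:00 to 02:00 UTC) and stand in for whatever bounds a given training set uses.

```python
from datetime import datetime

# Normalization bounds (from the experiment in section 5; assumptions otherwise).
LAT_MIN, LAT_MAX = 36.0, 40.0
LON_MIN, LON_MAX = -78.0, -74.0
T_MIN = datetime(2020, 5, 25, 22, 0, 0)
T_MAX = datetime(2020, 5, 26, 2, 0, 0)

def parse_and_normalize(filename):
    """Parse <lat>+<long>+<YYYY>-<MM>-<DD>T<hh>:<mm>:<ss>.png and return
    (lat, lon, time), each normalized to [0, 1] over the training bounds."""
    stem = filename[:-len(".png")]
    lat_s, lon_s, ts = stem.split("+")   # '+' separates the three fields
    lat = (float(lat_s) - LAT_MIN) / (LAT_MAX - LAT_MIN)
    lon = (float(lon_s) - LON_MIN) / (LON_MAX - LON_MIN)
    t = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")
    t_norm = (t - T_MIN) / (T_MAX - T_MIN)   # timedelta ratio -> float
    return lat, lon, t_norm
```

A file at the center of the test area and halfway through the time window maps to (0.5, 0.5, 0.5), matching the sigmoid-bounded output domain of the model.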

The data are broken into training and validation sets based on a user-provided split and random assignment.

We use the ktrain package to train the models [15]. The tensorflow.keras.Model object is wrapped in a ktrain.learner object to simplify training and access ktrain's built-in callbacks. We train using the ktrain.learner.autofit function with an initial maximum learning rate of 2⁻⁴. The training function applies the triangular learning rate policy introduced by Smith [16], reducing the maximum learning rate when a plateau is encountered on the validation loss.
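The triangular policy itself is simple to state: within each cycle the learning rate ramps linearly from a base rate up to a maximum and back down. A minimal sketch, with illustrative step counts and rate bounds (ktrain's autofit additionally reduces the maximum rate when the validation loss plateaus, as noted above):

```python
import math

def triangular_lr(step, step_size, base_lr, max_lr):
    """Smith's triangular policy: the learning rate ramps linearly from
    base_lr to max_lr and back over each cycle of 2 * step_size steps."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)       # distance from cycle peak
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

At step 0 the rate is at the base, at the cycle midpoint it reaches the maximum, and at the end of the cycle it returns to the base.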

The container saves models in the .h5 format to a user-specified directory that maps to the host. In this way, the model can be saved either locally or to object storage.

We established an experimental space from 36°N to 40°N, from 074°W to 078°W, and from 2020-05-25T22:00:00 UTC to 2020-05-26T02:00:00 UTC. We trained the model using 6,788 images reduced to single channel 224 x 224. We validated on 715 images similarly reduced in color and resolution. The base images were of different aspect ratios. The training routine uses a batch size of 32. The figure below details the application of the triangular learning rate policy.

Training loss converged to approximately 5.5 nautical miles and validation loss converged to just over 6.5 nautical miles after 52 epochs as shown below.

5.3 Performance on a Notional Vessel Transit

We consider a four-hour period in the test area defined above. A notional vessel is on a course of 208° true—that is, referenced to true north—at a speed of 12 knots (nautical miles per hour; a nautical mile is slightly longer than a statute mile). The vessel employs our system to fix its position hourly beginning at 22:00 UTC. The chart below is overlaid with the vessel's actual positions in blue and predicted positions in red.
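The vessel's hourly "actual positions" follow from its course and speed by dead reckoning. A plane-sailing sketch (a flat-Earth approximation that is adequate over a few tens of miles; the start point below is assumed for illustration):

```python
import math

def dead_reckon(lat, lon, course_true_deg, speed_kn, hours):
    """Advance a position (degrees) along a true course at a given speed
    using the plane-sailing approximation."""
    d = speed_kn * hours                     # distance run, nautical miles
    c = math.radians(course_true_deg)
    dlat = d * math.cos(c) / 60.0            # 1 degree of latitude = 60 nm
    mid_lat = math.radians(lat + dlat / 2)   # mid-latitude for longitude scale
    dlon = d * math.sin(c) / (60.0 * math.cos(mid_lat))
    return lat + dlat, lon + dlon
```

One hour at 12 knots due south, for example, moves the vessel 0.2° of latitude; a 208° true course shifts it south-southwest by the corresponding components.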

Our model is trained using a spatial-temporal grid based on our intuition that the model is learning an interpolation and should, therefore, be trained on data spaced at regular intervals. Spatial interpolation was generally quite good. Temporal interpolation was more challenging, likely owing to our decision to limit the temporal density to 20-minute intervals during training. Mariners typically take fixes at regular intervals on the hour, half hour, quarter hour, etc. and our fix interval for the example above aligns with times trained on the temporal grid. We leave improving temporal interpolation as an area for further study.

6.1 The Camera Container and Broker

The edge implementation consists of three Docker containers on a Jetson connected via the MQTT protocol. The camera container 'creates' an image (in this case synthetically) and forwards it via the MQTT broker to the TensorFlow-powered inference container. Since we are using synthetic images, our current implementation of the camera container takes an image from a directory of images and forwards it for inferencing. The camera container displays images as they are loaded to assist in troubleshooting.

The camera container does much of the image preprocessing (primarily reducing any image to 224x224 and converting it to grayscale). We do this at the camera to reduce the size of the messages passed over MQTT, making the message network more reliable.
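That reduction can be sketched without OpenCV using nearest-neighbor index sampling (the container itself uses OpenCV [14]; the /255 scaling is our assumption about the model's expected input range):

```python
import numpy as np

def to_model_input(img_rgb):
    """Grayscale an (H, W, 3) uint8 image and reduce it to the model's
    (224, 224, 1) input shape by nearest-neighbor sampling."""
    gray = img_rgb.mean(axis=2)              # collapse color channels
    h, w = gray.shape
    rows = np.arange(224) * h // 224         # nearest source row per target row
    cols = np.arange(224) * w // 224
    small = gray[rows][:, cols]
    return (small / 255.0)[..., np.newaxis]  # scale and add channel axis
```

Doing this before publishing keeps each MQTT message to a single small array rather than a full-resolution frame.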

The MQTT broker sits between the camera container and the inference container and acts as a bridge to pass images (along with the file name, which has the ground truth embedded in it and is used for assessing prediction accuracy).

The inference container runs the TensorFlow model that was trained in the cloud. We found transferring models using the older .h5 format to be more reliable than more current Keras methods. On receipt of an image, the container further preprocesses the image, feeds the processed image forward through the network and displays the output as text. We also provide a measure of accuracy (using the ground truth which is embedded in the file names passed through).

The screenshot below shows the edge device at work. In the lower right you have the camera container. In the lower left you have the test image it generated (after preprocessing). In the upper left is the broker that provides communication to the inference engine. And in the upper right is the inference container with readouts from two different prediction requests sent to it by the camera container.

This section walks through a detailed example of generating synthetic data, training a model in the cloud, pushing that model to an object store, pulling the model to the edge device, capturing an image locally, and making an inference based on that image.

We begin by provisioning a virtual server on a cloud provider. Rather than storing to object storage, we will store the images locally and train the model on the same virtual server. We create a local directory to hold the images, then clone the repo and cd into the image_generator directory.

For this case, we will use the ssc_gen.yml file as it appears in the repository. We want to build a total of 20,000 images. To speed the process we will use 10 workers.

Get rid of the now-unneeded workers.

Edit the ssc_gen.yml file to change the target directory to /data/image_test and build 1,000 test images.

7.2 Preprocessing the Images

We will preprocess the image files in place in preparation for training. To do this, cd to the preprocessor directory and build the dockerfile.

We will make a new directory to store our processed Numpy arrays.

Edit the preprocessor.yml file as necessary, spin up a container, and preprocess the images.

Once the preprocessing is complete and you have verified that the .npy files are where you expect, tear down the container.

Create a directory to house models.

Spin up the container and train a model.

You should now see your trained model in data/models on the host machine.

7.4 Spooling up Edge Device for Images and Predictions

[1] J. Kasper and C. Hutchinson, "The Omega navigation system--An overview," in IEEE Communications Society Magazine, vol. 16, no. 3, pp. 23-35, May 1978, doi: 10.1109/MCOM.1978.1089729.

[2] W. J. Thrall, "Loran-C, an Overview," in Proc. 13th Annu. Precise Time and Time Interval (PTTI) Applications and Planning Meet., NASA Conf. Publ. 2220, p. 121, 1981.

[3] G. Beutler, W. Gurtner, M. Rothacher, U. Wild and E. Frei, "Relative Static Positioning with the Global Positioning System: Basic Technical Considerations," in Global Positioning System: An Overview., International Association of Geodesy Symposia 102, ch. 1, 1990.

[4] E. G. Blackwell, "Overview of Differential GPS Methods" in Navigation, 32: 114-125, 1985. doi:10.1002/j.2161-4296.1985.tb00895.x

[5] U. S. Coast Guard Navigation Center, "Special Notice Regarding LORAN Closure",

[7] M. Garvin, "Future of Celestial Navigation and the Ocean-Going Military Navigator," OTS Master's Level Projects & Papers. 41, 2010.

[8] R. Whitman, "Astroinertial Navigation for Cruise Applications." Northrop Corporation, 1980.

[9] S. Lathuiliere, P. Mesejo, X. Alameda-Pineda and R. Horaud, "A Comprehensive Analysis of Deep Regression," arXiv:1803.08450v2, 2019.

[10] NATIONAL GEOSPATIAL-INTELLIGENCE AGENCY, Pub No. 9, American Practical Navigator: an Epitome of Navigation, 2017.

[11] C. N. Alam, K. Manaf, A. R. Atmadja and D. K. Aurum, "Implementation of haversine formula for counting event visitor in the radius based on Android application," 2016 4th International Conference on Cyber and IT Service Management, Bandung, 2016, pp. 1-6, doi: 10.1109/CITSM.2016.7577575.

[12] M. Gates, G. Zotti and A. Wolf, Stellarium User Guide, 2016. doi:10.13140/RG.2.2.32203.59688

[14] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.

[15] A. Maiya, "ktrain: A Low-Code Library for Augmented Machine Learning," arXiv:2004.10703, 2020.

[16] L. Smith, "Cyclical Learning Rates for Training Neural Networks," arXiv:1506.01186v6, 2017.
