Geographicity? Say that 3 times fast.

Learned a new word today: geographicity. It’s in the title of an upcoming edX MOOC offered by a group of Swiss geographers, Exploring Human’s Space: an Introduction to Geographicity. The class is designed to “explore how geography, cartography, urbanization and spatial justice play a role in shaping the notion of human space.” Sounds marvelous and, if done well, could be an interesting entry into the somewhat opaque social-science side of my beloved discipline of geography.

The word itself – geographicity – is unlikely to ever make the OED. It was coined sometime after 1999 by two philosophers, Gary Backhaus and John Murungi. I first saw its definition (geographicity = the spatial component of all phenomena) in the preface of their 2007 book, Colonial and Global Interfacings: Imperial Hegemonies and Democratizing Resistances.  Geographicity also figures prominently in Esoscapes: Geographical Patternings of Relations, and Lived Topographies and their Mediational Forces.

Here’s a passage from where I first saw the term discussed, the preface of the Colonial and Global Interfacings book. Go ahead, read it through and challenge yourself to understand it. I have, several times, and I’m still clueless. Absurdly and gratuitously confusing academic-speak.

I really do have much respect for social theorists. Some of my best friends are social theorists. (Okay, not really.) I’ve enjoyed the rich dialogue among fellow geographers about just this topic recently. In this case, it’s philosophers writing, not geographers, but still, I’d argue that this passage lies at the extreme edge of English-language communication.

Communicating with Maps Part 3: Considering uncertainty and error

An Exclusive Directions Magazine Series

In the third part of our series on Communicating with Maps, Diana Sinton discusses the complex and important role of inherent uncertainty in the maps we produce. As a means of communication, published maps are trusted by the public well beyond what they may have earned. My theory is that because so few people have ever made maps, they have no sense of how the data might have been collected, what decisions were made during map design, or how many opportunities for error the whole process provides, so they simply accept a published map at face value.

But, if you were to hand someone a blank piece of paper and ask them to draw their hometown, the experience would be revealing. They may recall some topological relationships well — such as the sequence of streets between their home and school, or how to get to a friend’s house — but most people would also experience a tremendous amount of uncertainty. Maybe the results would include locational errors (drawing the school north rather than south of an intersection), or an attribute error (labeling a building a post office when it was really a bank). Just as likely, there would be blank areas in the sketch. Through this experience, the mapmaker would become aware of terra incognita and uncertainty about what was where.

In a similar way, every map contains imperfections. In his iconic book How to Lie with Maps, Mark Monmonier explains how we lie with maps through manipulations and distortions, deliberate or otherwise. Uncertainty, errors, mistakes and omissions are inevitable. The complexity of the natural and social world must be simplified and generalized to be mapped, and subjective decisions are necessarily made in the map design process. That’s just the way it is, even though few are aware of it.

Meanwhile, maps continue to be the most popular and common form of graphic representation of our natural and social world. They’re used worldwide in decision-making processes every day. That won’t change, but more could be understood about uncertainty and error within the realm of geospatial information.

The analog of statistics

Similar problems exist in the world of numbers. A probability, for example, is a derived calculation of the likelihood of an event occurring, and the likelihood of any particular outcome depends on how many total outcomes are possible. Statisticians use numerical confidence intervals to communicate how much variability there could be in the outcomes if one tried to replicate the same measurement or pattern. Graphically, confidence intervals can be represented as error bars depicting the possible variability around a measured value. Probabilities, confidence intervals and error bars are ways that we communicate the uncertainty of measured, quantitative values in the social and natural world. Recognizing and acknowledging this uncertainty is part of the scientific process, though that can be a difficult message to accept.
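
To make that concrete, here is a minimal sketch in Python; the measurement values are invented, and the 1.96 multiplier is the familiar normal-approximation factor for a 95% interval:

import numpy as np

# Hypothetical repeated measurements of the same quantity (invented values).
measurements = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7])

mean = measurements.mean()
sem = measurements.std(ddof=1) / np.sqrt(len(measurements))  # standard error of the mean

# Approximate 95% confidence interval (normal approximation; a t-based
# interval would be slightly wider for a sample this small).
half_width = 1.96 * sem
print(f"mean = {mean:.2f}, 95% CI is roughly [{mean - half_width:.2f}, {mean + half_width:.2f}]")
# Drawn as an error bar, the bar would extend half_width above and below the mean.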

Uncertainty and error are part of the mapping process in just as many ways, and standards exist for how to measure and document them. The National Standard for Spatial Data Accuracy (NSSDA), which in the late 1990s replaced the 1940s National Map Accuracy Standards, applies a root-mean-square-error (RMSE) approach, together with 95% confidence intervals, to determine the positional accuracy of geospatial data. Take a dataset of X and Y point coordinates that fall at the center of two intersecting roads and compare them to coordinates for the same points that are already accepted as true (because they were derived by higher-accuracy methods or from an independent source, for example). Once the RMSE is calculated between these two datasets, the NSSDA explains that:

"Accuracy reported at the 95% confidence level means that 95% of the positions in the dataset will have an error with respect to true ground position that is equal to or smaller than the reported accuracy value. The reported accuracy value reflects all uncertainties, including those introduced by geodetic control coordinates, compilation, and final computation of ground coordinate values in the product."

Requiring data to meet standards is one approach to managing uncertainty and reducing the probability of errors. Although assessing potential errors in data sets can be a challenge, undertaking such quality control efforts can build trust in an organization. A good example of this is the European Marine Observation and Data Network, which requires anyone contributing data to complete a Confidence Assessment step in the submission process.

Scale

One way to tolerate and mitigate uncertainty is to modify scale. Measurements of sinuous perimeters, such as coastlines, will vary significantly depending on the length of the unit of measurement. There is power in method, and more specific methods are perceived to be more powerful. Modern mapping is filled with situations where our methods don’t align with our measurements, tools or objectives. Our version of measuring with a micrometer, marking with chalk and cutting with an axe could be measuring with a smartphone, marking by heads-up digitizing and clipping with an XY tolerance of inches. Our use of geospatial data at particular scales, resolutions and precisions should be informed by and aligned with our mapping intent, our acceptance of error and our tolerance for uncertainty. Mike Bostock illustrates this deftly with his explanation of geometric line simplification, and John Nelson reminds us of how absurdly false the precision implied by long strings of decimal places can be.
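
To see how quickly decimal places outrun any honest measurement, here is a tiny Python sketch; it assumes the usual rough figure of about 111,320 meters per degree of latitude, with longitude spans shrinking further by the cosine of the latitude:

# Rough ground distance represented by each decimal place of a latitude value.
METERS_PER_DEGREE_LAT = 111_320  # approximate; longitude spans are smaller away from the equator

for places in range(1, 8):
    ground = METERS_PER_DEGREE_LAT * 10 ** (-places)
    print(f"{places} decimal place(s): about {ground:,.3f} m")
# A coordinate quoted to 7 decimal places implies roughly centimeter precision,
# far beyond what a smartphone fix or heads-up digitizing can honestly support.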

Cartographic solutions

Modifying scale or aggregating data may mask some types of uncertainty, while applying alternative cartographic solutions may be less of a compromise. For decades, cartographers have experimented with map symbols that are fuzzy, indistinct or partially transparent to indicate to the viewer that some degree of uncertainty is associated with the corresponding data. Essentially these are cartographic versions of statistical box plots, which themselves can also be made fuzzy to illustrate variability. Research has shown that certain visual variables, such as color intensity, value or edge crispness, are more effective at communicating uncertainty than differences in shape or size. Unfortunately, novel cartographic solutions such as manipulating the common borders between polygons to suggest an uncertain zone of transition are, at this point, more readily achieved with drawing software than with mapping software.
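
As one illustration of the visual-variable approach, here is a small Python/matplotlib sketch that fades point symbols in proportion to an invented uncertainty score, so that less certain values are drawn more faintly:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical points with an uncertainty score between 0 (certain) and 1 (very uncertain).
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 30), rng.uniform(0, 10, 30)
uncertainty = rng.uniform(0, 1, 30)

# Map uncertainty to transparency: the more uncertain the value, the fainter the symbol.
base_rgb = (0.1, 0.3, 0.7)
colors = [(*base_rgb, 1.0 - 0.8 * u) for u in uncertainty]

plt.scatter(x, y, c=colors, s=80, edgecolors="none")
plt.title("Fainter symbols indicate more uncertain values (illustrative only)")
plt.show()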

Choosing how to label values in a map legend can also signal how confident one is in those values. Selecting decimal places that are appropriate for the data in question, or opting for a vaguer, relative description, may be the right approach. “Lower” and “Higher” may be just the right way to describe the spectrum of data values being shown, particularly when mapping modeled probabilities such as erosion or wildfire risk.
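
A small sketch of that idea, using pandas to bin invented risk probabilities into deliberately relative legend classes rather than falsely precise cut-offs:

import pandas as pd

# Hypothetical modeled wildfire-risk probabilities for a handful of map units.
risk = pd.Series([0.02, 0.11, 0.35, 0.58, 0.81, 0.93])

# Three relative classes with deliberately vague legend labels.
classes = pd.cut(risk, bins=3, labels=["Lower", "Moderate", "Higher"])
print(classes.tolist())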

Concluding thoughts

Sharing news about uncertainty in maps isn’t meant to bring a mapping effort to a grinding halt. Uncertainty within mapping is a given; ignoring it only promotes misuse of maps and undermines the credibility that they do deserve. Instead, expanding awareness may help us develop more effective ways to communicate information to map users and readers. It all goes back to the intent of the map. For example, research is underway to determine effective techniques for deliberately adding uncertainty and errors to mapped data so that the privacy and confidentiality of the data can be maintained while valid patterns are still displayed.

An additional reason to expand awareness about uncertainty and errors in maps and mapping processes is the growing problem of location fraud within the world of location-based services. Or, as this article is quick to point out, fraud is only one source of location inaccuracy that the business world is realizing it must confront. There is a whole new commercial audience out there that needs to know about minimizing error and uncertainty in the world of mapping and spatial analysis.

Our exclusive series, Communicating with Maps:

Communicating through Maps Part 1: Exploring the challenges and complexities of GIS mapping 

Communicating with Maps Part 2: Discussing the issues with CaGIS President Sarah Battersby

Communicating with Maps Part 3: Considering uncertainty and error

Selected References for Communicating with Maps

Uncommon situations that warrant spontaneous purchases

I’m finding the development of location-based services to be both intellectually intriguing and amusing. ThinkNear, the mobile advertising venture of Telenav, intrigues me because I like the ways they’ve grappled with explaining the complexities of geospatial location to business-minded novices. Seeing their current home page ad has been an amusing highlight of my day. Definitely an advertising idea thought of by a man, but I admit it’s clever. Makes me want to send it to my friends and see whether they get it.

Foo Fighter frenzy

Until this summer, it’s been years since I’ve paid attention to the Foo Fighters. They were always one of Chris’s bands, along with ones like Rush, King Crimson, Sugar, The Charlatans, Urge Overkill. Usually there were a couple of songs from each group that I liked, but for the most part, I couldn’t really listen to some of these bands for very long.  I’d try. How many times I’ve tried to listen to Bob Mould, knowing how much C loves him. But I just can’t, and that’s okay. There are plenty of musicians that Chris and I have in common, and plenty of hours of life for us to enjoy our own favorites.

Summer 2015, the renaissance of the Foo Fighters in my life. It began in July when the internet brought me the story of these wacky Italians who organized 1000 of their closest friends to play Learn to Fly together as a mass invitation for Dave Grohl to play them a concert.  Of course Dave agreed, in his own charming way. I loved the group video and spent some days thinking about what person or group I adore enough to make such an attempt. Honestly, can’t think of anyone. Though I once almost bought a $500 plane ticket to see Rodrigo & Gabriela again.

Anyway, listening to Learn to Fly just reminded me of how much I did enjoy certain FF songs. Which meant some evening sessions playing their old albums, and then Chris brought home a few episodes of Sonic Highways on Netflix. And for three nights this week, we learned about the music scenes of Chicago, Washington DC, and Nashville. I now have a fan-crush on Dave Grohl. Did you know he grew up only about 20 miles away from me, in the greater Washington DC area? Did you know he’s actually two years younger than me? Needless to say, our paths never would have crossed during our adolescent years. His first concert was Naked Raygun. Mine was the Beach Boys, followed a few months later by Shaun Cassidy.

Fast forward 40 years. Those first few episodes of Sonic Highways were great. I learned about how clueless I am about the DC rock scene, and all rock scenes for that matter. I’ve only been to the 9:30 Club once in my life, to see Adrian Belew and The Bears, maybe sometime in 1986 or 1987, after his King Crimson years. How did I even know about Adrian Belew, or King Crimson?  Chris, of course.

Sonic Highways also taught me about the Zac Brown Band, and Tony Joe White, and this thing called go-go music. Yes, that musical style was being launched during my childhood and teenage years in the DC area, while I was ensconced in my suburban neighborhood.

Zac Brown is playing next week in Saratoga Springs, less than 4 hours from where I live.  Road trip!

Why I’ll Be at Bates College Next Week

I’m a geographer by training, specifically a geographic information scientist. For about 20 yrs I’ve been teaching people how to make maps (via GIS and other digital mapping techniques). Though now I mostly teach undergraduate and graduate students, I’ve also had the pleasure of running almost 100 different professional development workshops for fellow faculty, academic staff, librarians, the general public, and the occasional group of children.

The faculty that I’ve worked with – especially during my years with the National Institute for Technology & Liberal Education (NITLE, 2003 – 2007), and then as the Director of Spatial Curriculum & Research at the University of Redlands (2007 – 2011) – represent a very wide range of disciplines, from close to 20 different academic departments. What they all have in common is that they are NOT geographers, and virtually all of them would say they are unfamiliar with a geographic way of looking at the world. So, especially during the first few years, I was very curious as to why they were all so keen to learn how to use GIS. Though several knew enough to say “spatial analysis,” the overwhelming response was “visualization.” They wanted to see the patterns in their data, and overlay them with a myriad of other layers of information.

At some point, someone also said to me, maybe back in 2004 or so, that they found “spatial thinking” to be very powerful. I wasn’t even sure what those words meant together, and I was a geographer. So began my lengthy quest to understand “spatial thinking” better. I started reading the research done by psychologists who specialize in “spatial cognition,” and talking with them at conferences. I sought to understand how their assessments of mentally rotating 3-dimensional objects, in abstract or table space, had anything to do with my use of geographical data in landscape-level space. By the time the National Research Council published the Learning to Think Spatially report (National Academies Press, 2006), I’d discovered my tribe. It’s filled with people from all different backgrounds (geography, geosciences, STEM, psychology, engineering, architecture, art, design, dentistry, etc.) with a passion for how the spatial informs our world. Together with a few friends, I wrote The People’s Guide to Spatial Thinking (NCGE, 2013) as an attempt to communicate these ideas as broadly as possible.

One dimension of this (no pun intended) that intrigues me is how frequently the term “visual” is used in situations that, to me, are clearly “spatial.” And this is what I’ll be exploring further during a talk I’ll be giving next week at a Gordon Research Conference on Visualization in Science and Education, which Bates is hosting.

In the absence of visual impairment, our sense of sight is how we perceive the majority of information from the external world. So when someone says to me, “I’m a visual learner,” I can’t help but wonder what exactly they mean by that. If they show me a sketch they’ve just made of how their Cousin Earl fits into the family tree, or a set of Ikea instructions, or a graph of recent economic data, or a bracket diagram of basketball teams at the end of a season, or a map of how Ebola infections spread over time, it’s actually the SPATIAL arrangement of information through which meaning is extracted, not just the fact that they’re using their eyesight to access the image or representation. Is it visual thinking when you need to write down someone’s name or phone number, and look at it, to have any chance of remembering it, rather than just having them say it out loud to you once? Yes. Visual trumps aural. But sketching a little diagram on the back of an envelope to explain something? Spatial, enabled by vision.

Spatial thinking is an ability to visualize and interpret location, position, distance, directions, relationships, change, and movement through space. STEM learners constantly need to extract meaning through, and communicate with, these internal and external representations, and the spatial thinking necessary to do that well is chronically under-recognized, under-valued, and under-taught.

Communicating with Maps Part 2: Discussing the issues with CaGIS President Sarah Battersby

An Exclusive Directions Magazine Series

In the second part of our summer series on Communicating with Maps, Diana Sinton discusses issues and advances in mapmaking with cartographer Sarah Battersby, a research scientist at Tableau Software and currently the president of the Cartographic and Geographic Information Society.

Q: What are some of the key developments that you have seen in cartography in the last decade?

A: I think that one of the most exciting cartographic developments in the last decade is the explosion of online mapping and tools for map design. It’s amazing to think about the huge efforts that have gone into making it easy for people to visualize their spatial data, whether as a Google Map mashup, using desktop or online GIS, with d3 or other scripting libraries, etc. The downside to all of this is that I think it is still too easy to make a bad map, and way too easy to distribute that bad map to a wide audience. My cartographic archive of what not to do just keeps growing thanks to all of the great finds on Twitter and Facebook.

On the other hand, there are a lot of people who really care about helping others work with and understand spatial data and there is some great research in cartography, GIScience and in spatial thinking that I think will help shape the next generation of tools that we use to design maps to make them more intuitive, more beautiful and generally more effective for understanding spatial data.   

The growth of the open source geospatial community has also been impressive. It is exciting to see so many people dedicated to improving the world of geospatial data and technology, and to helping the world with geospatial, like the work coordinated by the Humanitarian OpenStreetMap Team.  I think this open source momentum is key in the future of cartography and GIS.

Q: People often bring up the issue that Web Mercator is used as a default projection with web maps. That creates a tension with all of us who were taught in cartography and GIS classes that the Mercator projection is almost always inappropriate for the maps we’re making; it grossly distorts areas toward the poles and is presumed to give people false ideas about the size of countries and even continents. How much of a problem is this really? Can we cross fear-of-Mercator off of our worry list?

A: A few years back I did a bit of “forensic cartography” research on this to try to figure out how Web Mercator became the standard, and I think it is because of the success of Google Maps — the projection is even occasionally referred to as “Google Mercator.” Other online mapping systems changed projection to match. I’m not sure what the logic was behind the original selection of the projection, but it is easier to tile a rectangular projection, and the equations for Mercator are simple.  The conformal property of the projection is also nice for local-scale mapping.  But…is it the only choice? I imagine that any rectangular projection should tile nicely, and I imagine that it won’t be too many years before we have online mapping systems that don’t tie us to a single projection. For instance, Bernie Jenny has done some amazing work with adaptive map projections.

As for the distortion in the Web Mercator projection, I think this is a significant issue for visual analysis.  I’m a big believer in one of Egenhofer and Mark’s principles of Naïve Geography, that “maps are more real than experience.” I have thought of this as the map becoming our source of truth; even if people know that there is distortion in the map I think there are very few people who can successfully compensate for it in reading the map. This is a significant problem for any distance or area-based analyses calculated in Web Mercator coordinates, as well as for the map reader trying to visually make sense of spatial patterns.

I definitely wouldn’t cross Web Mercator off of our list of things to worry about. It is imperative for map designers to be actively thinking about and addressing issues with projection; otherwise their analyses may be hugely incorrect. It is also important for map readers to be cognizant of the distortions in Web Mercator and other projections. I don’t mean that I expect people to be able to identify and calculate distortion, just to maintain a healthy skepticism with their map reading.
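
To put rough numbers on that distortion, here is a small illustrative calculation in Python; the latitudes are arbitrary, and the 1/cos(latitude) relationship is the standard Mercator scale factor:

import numpy as np

# The linear scale factor of the (Web) Mercator projection at a given latitude
# is approximately 1 / cos(latitude); apparent areas inflate by roughly its square.
for lat in (0, 30, 45, 60, 70, 80):
    k = 1 / np.cos(np.radians(lat))
    print(f"latitude {lat:2d} degrees: lengths x{k:4.2f}, areas x{k ** 2:5.2f}")
# At 60 degrees a feature is drawn about twice as long and roughly four times
# as large as an equal-sized feature at the equator.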

Q: What do you think are the top “gotcha” issues for mapping today, from the perspective of a cartographic software designer? What about from the perspective of John Q. Mapmaker?

A: I think that every cartographer has a set of pet issues that they always look for. For me, I often focus on classification and data normalization. It drives me crazy when I can’t figure out how the mapmaker decided to break up the data into classes. Are those quantiles? Natural breaks? Do the breaks have meaning? Class breaks make such a huge difference in the resulting pattern on the map and it drives me crazy when I see the default 5-class natural breaks map without any explanation. To me this is the sign that the mapmaker doesn’t know much about the data.  

I also see way too many maps that are really just population maps. Should it be a surprise that locations with more people tend to have higher counts of all sorts of other attributes? This is another problem of not thinking enough about the data. If you don’t know your data well, how do you make a map that tells a clear — and appropriate — story? 
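
To make the normalization and classification points concrete, here is a minimal Python sketch with invented county counts and populations, contrasting raw counts with rates and showing how two common classing schemes produce different breaks:

import numpy as np

# Hypothetical county data: raw event counts and populations (invented values).
counts = np.array([12, 340, 95, 1210, 48, 760, 15, 510])
population = np.array([8_000, 210_000, 60_000, 900_000, 30_000, 450_000, 9_500, 300_000])

# Normalize: a rate per 10,000 residents tells a different story than raw counts,
# which largely reproduce the population map.
rate = counts / population * 10_000

# Two common classing schemes produce different class breaks (and map patterns).
equal_interval = np.linspace(rate.min(), rate.max(), 5)     # breaks for 4 classes
quantiles = np.quantile(rate, [0, 0.25, 0.5, 0.75, 1.0])    # quartile breaks

print("rates per 10,000:      ", np.round(rate, 1))
print("equal-interval breaks: ", np.round(equal_interval, 1))
print("quantile breaks:       ", np.round(quantiles, 1))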

Q: You have the perspective of having taught students about mapmaking for many years, and have done much basic research in cartography. Now you are in the position of working with software designers to help them implement good mapmaking principles to help users of commercial software design more effective maps. How is this shift from basic to applied research working? How has it changed how you pose research questions?

A: It is great to focus on specific, applied problems tied to facilitating how people ask and answer spatial questions. There is still much to think about in terms of general cartography, but now that we’re at a time when it is so easy for anyone to take a dataset and turn it into a map, I think about what we can do to help people make better maps faster. My research has always focused on how people understand and use spatial data, so there hasn’t been a change in my research direction, but I have done a lot of stepping back to what I would call the “cartographic primitives.” Lately I’ve been doing a lot of thinking about very basic questions of what information we need to obtain from maps and what characteristics of a map would facilitate finding answers to these questions. I also spend a good bit of time thinking about what makes an interesting pattern on a map and how I can help someone make better choices about their map type, colors or classification to uncover these interesting patterns.

Essentially, I feel like the questions I face now are based on how we can take our collective research and applied knowledge about designing better maps and put it to use helping people that don’t have decades or even semesters of work in cartography. It’s an amazing challenge and hopefully I can do some good to help the world see and understand their spatial data more effectively.

Our exclusive series, Communicating with Maps:

Communicating through Maps Part 1: Exploring the challenges and complexities of GIS mapping 

Communicating with Maps Part 2: Discussing the issues with CaGIS President Sarah Battersby

Communicating with Maps Part 3: Considering uncertainty and error

Selected References for Communicating with Maps

Elevation data: Where to go and what to know

Digital representations of the surface of the earth are a key data set for many GIS projects, but locating, identifying, downloading and manipulating digital elevation data is not for the faint of heart. There are many different skills required and hundreds of tools, systems and instruments from which to choose. In this article, author Diana Sinton highlights available resources and need-to-know information.

Introduction to the digital elevation model

The most common digital representation of the surface of the earth presents values of elevation above sea level, often derived from sampled point measurements and stored in raster format as a digital terrain model or digital elevation model (DEM), or as a vector triangulated irregular network (TIN). Apart from generating a topographic surface itself, these data are also the basis for deriving slope gradient, slope aspect and hillshade relief. Digital elevation data are central to transportation planning, land use planning, and geological and hydrological analyses, among countless other applications. For this article, we’ll focus on DEMs as a generic format of elevation data in digital form.
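
As a rough illustration of how such derivatives come out of a DEM, here is a minimal Python sketch that computes slope in degrees from a tiny invented elevation grid; it assumes square cells whose size is in the same units as the elevations:

import numpy as np

def slope_degrees(dem: np.ndarray, cell_size: float) -> np.ndarray:
    """Slope in degrees from a 2-D array of elevations with square cells."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # elevation change per unit distance
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Tiny invented 4 x 4 DEM with 10 m cells, elevations in meters.
dem = np.array([[100, 102, 105, 109],
                [101, 103, 106, 110],
                [102, 104, 108, 113],
                [103, 106, 110, 116]], dtype=float)

print(np.round(slope_degrees(dem, cell_size=10.0), 1))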

For many years, the most common source and scale for a DEM were the 10-meter and 30-meter resolution data organized and distributed by the US Geological Survey to align with their 7 ½ minute topographic quad sheets. These original DEMs were derived from traditional photogrammetric methods or reverse-engineered from contour lines, and errors and inaccuracies abound. Nine times out of ten, one’s area of interest was situated at the intersection of four quad sheets, so there was great rejoicing when it became possible to download “seamless” elevation data, forgoing the need to edge-match or mosaic multiple data sets together.

The horizontal resolution of elevation data is often expressed in angular units of arc seconds, or 1/3600 of a degree. One arc second represents approximately a 30-meter grid cell. Accordingly, one-third of an arc second is approximately ten meters of distance, and one-ninth of an arc second is about three meters. However, these relationships only hold true at the equator, where both latitudes and longitudes are evenly spaced. Moving towards the poles, the meridians of longitude converge and regular grid spacing becomes distorted. By the time one is measuring in arc seconds at 49 degrees latitude, an arc second of longitude has shrunk to 20.25 meters and grid cells have become elongated in shape.
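
A quick Python sketch of that convergence, using the rule of thumb that one arc second spans about 30.87 meters at the equator and that east-west spans shrink by the cosine of the latitude on a sphere:

import numpy as np

ARCSEC_METERS_AT_EQUATOR = 30.87  # approximate span of one arc second of longitude at the equator

for lat in (0, 30, 49, 60):
    width = ARCSEC_METERS_AT_EQUATOR * np.cos(np.radians(lat))
    print(f"latitude {lat:2d} degrees: 1 arc second of longitude is about {width:.2f} m")
# At 49 degrees this comes to about 20.25 m, so a "1 arc second" cell still spans
# roughly 30 m north-south but only about 20 m east-west: the elongation noted above.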

Becoming familiar with the arc second system of horizontal measurements is a worthwhile investment of time when navigating elevation data sites, but it may be even more important to understand the absolute and relative vertical errors within DEM data. The original production goal of the 7 ½ minute USGS quads included a vertical accuracy standard of 7 meters, and up to 15 m variability was permitted (USGS Data Users Guide, pdf).

DEM meets Big Data in the US

Fast forward to 2015, and digital elevation information has intersected with the Big Data movement. In the United States, the National Elevation Dataset (NED) has replaced the former system of quad-based DEMs. Significant efforts have been made to ensure that the horizontal and vertical datums, elevation units, and projections or coordinate systems are consistent or, where needed, optimized for each locale. Root mean square errors for vertical accuracy have fallen to less than 2 meters within much of the NED collection. Light Detection and Ranging (LIDAR) and interferometric synthetic aperture radar (IfSAR) have become the standard approaches for high resolution data collection, and this has allowed for improvements and upgrades throughout the United States. Unlike the bare-earth presumption of DEM data, these new sources also provide detailed data for what is on the surface of the earth, for example the heights of vegetation and structures. The use of new technologies has been particularly important in states such as Alaska, where conditions had never previously permitted consistent, high quality data to be collected.

Of course there are times when it is both desirable and necessary to access older data, particularly when needing to make comparisons between before-and-after geomorphic changes following earthquakes and volcanic eruptions. For such purposes, the USGS also maintains a collection of historic DEMs.

Global data resources

When elevation data outside of the U.S. are needed, two important sources include data derived originally from NASA’s Shuttle Radar Topography Mission (SRTM), as well as the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global digital elevation model, now at Version 2. Since its original collection in the year 2000, the SRTM data has been corrected and revised, and its 90-meter resolution coverage is some of the most comprehensive worldwide. ASTER’s Global DEM data has also undergone revisions and corrections, and its one arc second (30-meter) resolution extends to even broader global coverage.

New satellite technologies and demand for higher resolution, more consistent data are driving the growth in digital elevation data today. In 2010, DLR, Germany’s national aeronautics and space research center, launched the TanDEM-X satellite to partner with the already-orbiting TerraSAR-X, and the pair is now producing data designed to be high resolution, with excellent vertical accuracy, and as consistent and reliable in coverage as possible.

In the U.S., the current 3D Elevation Program has brought together multiple funding entities to produce and distribute nationwide LIDAR data coverage, with IfSAR-based data in Alaska. Acquiring and processing these data will take years, but there is wide agreement that it is a wise investment with extensive benefits for the public and private sectors alike. The specter of sea level change has also compelled NOAA to prioritize LIDAR-based topographic data for coastal regions.

Locating, identifying, downloading and manipulating digital elevation data is not for the faint of heart. New interfaces for data discovery such as Reverb|ECHO come complete with 317 platforms, 658 instruments and 717 sensors from which to choose. Even the simpler National Map and Earth Explorer interfaces assume that users are familiar with the optimal spacing of LIDAR point clouds, arc second measurements, and the deciphering of acronyms. OpenTopography is specifically designed to lower the access barriers to high resolution data, but to date its availability is limited.

My advice? Give yourself plenty of time to sort out what’s available for your area of interest and what you really need for your project or application. Being able to find exactly the data you seek, download it, figure out and manipulate its compression format, modify its projection or coordinate system and successfully add it to your project is likely to require persistence, patience and the knowledge of a rocket scientist.  Or two. 

March Map-Madness at Davidson

It’s been a while since I’ve posted, but I just heard this story and felt like sharing it widely. Undergrad students at Davidson are making maps of basketball plays and helping their team – via their coaches – be more successful than ever. They watch the games VERY CAREFULLY and plot the data. This is manual data collection, then manual data entry. Not, as the NPR story suggests, the same way that the big guys do it now, with lots of overhead cameras. And now that Kirk Goldsberry has hit the big time in letting the world know about this strategy, it’s surely becoming a more widespread practice.

I like that the guys at Davidson have figured it out on their own. I like that the first time, they turned in a “5-page essay” of the results to the coaches, and discovered how less-than-helpful that was.  So now they produce the much more visually effective “heat maps” and help the team learn about their competition, spatially, before tip-off.  And that they have the work-flow down to 10 minutes?  Give these guys a hand.  And, @NPR, next time – it’s okay to say “spatial analysis” and “GIS” as well.

Of course, smart undergrads have been doing this exact thing for a while. Like Travis Gingras from St. Lawrence, who did this with hockey almost 10 years ago!

Winter Snow Landscape Art

Check out these snow patterns. This takes a LOT of planning and significant capacity for spatial visualization in the process! It’s especially interesting how only the curve of the road in the lower right gives you any sense of scale.

h/t to Nag on the Lake.

Future of R with GIS

I was a total newbie to R before spring 2014. Then it was a little trial by fire, trying to learn just enough to keep up with grad students in a class I was co-teaching. Thank goodness for the “co-” part, as my partner was an expert in the topic, and I could contribute in my own areas of expertise, which were/are not R!  But I finished the semester with a new-found respect and, frankly, awe for what is possible with R. I have much to learn, and maybe, someday, the time.

Fast forward a few months and the topic keeps cropping up.  I shared a beer in Salzburg with Lex Comber and learned about one of his forthcoming publications, an Intro to R for Spatial Analysis and Mapping. Haven’t got my own copy yet, but if it’s what it seems to be, it’ll be one of my assigned texts in the future. In one of our webinars, Trisalyn Nelson spoke about her use of R with her graduate students. And today, I silently scanned through Alex Singleton‘s recent presentation on the Changed Face of GIS, in which R figures prominently for him.  There’s something going on here that some smart people have figured out.