
Peering into the IS of GIS

Wade Bishop teaches in the School of Information Sciences at the University of Tennessee, Knoxville. He is particularly interested in geographic information organization, access, and use, as well as the study of GI occupations, education, and training. From 2012 to 2015, he was a co-investigator, along with Tony Grubesic of Arizona State University, of the Geographic Information Librarianship (GIL) Project, which was funded by the Institute of Museum and Library Services. GIL sought to understand and document the types of knowledge, skills, and abilities that geographic information librarians need to be successful in their positions. Directions Magazine's Diana Sinton recently interviewed Dr. Bishop about the project and its final product, the new book from Springer, Geographic Information: Organization, Access, and Use.

DM: The Geographic Information Librarianship Project has been the only program of its type.  What was the motivation for the project?

WB: Well, when I was a graduate student, I was fortunate to meet Pete Reehling, the Geographic Information Systems Librarian at the University of South Florida. His enthusiasm for GIS, and the fact that there were (and still are) many job postings in libraries, museums, archives, and data centers, got me interested in learning more. The library school did not offer courses, so I learned GIS through electives offered in geography. I was the only library and information science student in the GIS classes, but I quickly noticed that my LIS skills were very useful in crafting search strategies to discover data and in keeping data organized once it was found. The curricular gap was obvious to me: GIS education would benefit from information science expertise in digital curation, metadata creation, information retrieval, and user experience design. Also, LIS programs needed GIS courses to meet workforce demand for the growing number of information professionals working with geographic information. So the GIL project was distinctive in that it was the first and only of its kind. Sure, some schools have offered GIS as a fun elective once in a while, but GIL built a purposive pathway, produced through curricular development informed by current practitioners and focused on the “abilities to locate, retrieve, analyze, and use geospatial data” — not just on teaching GIS software. This Springer book is the final outcome of the project, codifying the lectures into book chapters so that more people could benefit from the content.

DM: Producing a manageable set of student learning outcomes over the course of GIL was a major accomplishment. Now that the funding has ended, are these and other outcomes being adopted and implemented in curricular programs?

WB: We surveyed practicing GIS and map librarians, archivists, and other information professionals to validate the core competencies established in 2008 by the Map and Geographic Information Round Table (MAGIRT). We assumed that professionals in the real world could weigh in on the most important items to cover in these electives, and the resulting outcomes have been adopted here at the University of Tennessee. The core student learning outcomes likely will not change, given that the fundamentals of geography, cartography, and information science related to organization, access, and use do not alter much with each new geospatial technology or data format. MAGIRT is currently revising its core competency document, and that will inform future versions of the courses, but I think the core of knowing what GI is, and how to discover and curate it, will stay the same at the introductory levels covered in the book.

DM: I’ve heard you describe the world of GIS as being two-thirds “information science,” while you also recognize that it is two-thirds “geographic information.”  From your perspective, straddling those worlds with the work you do, have you found it easier for librarians and other information scientists to learn about geography and geospatial data, or for geographers and geospatial data experts to learn about library and information science?

WB: Acronyms cause problems in every field. As a GIS outsider, it has been easy to poke fun, even if most don't find it funny. I had a unique perspective in GIS classes as an information scientist (a term dating from 1967) and frankly found it odd to read about the GIS wars and the geographic information science/systems debates in the discipline. In information science, simply defining the term information stirs the intellectual waters, and a science for all systems reigns supreme in our IS acronym as systems change in any networked environment.

But to answer your actual question: the difficulty of learning anything new depends on the individual and their own motivations. Certainly, the most successful people in any field are the life-long learners who never stop acquiring new skills and remain on the cutting edge of their areas. These meta-disciplines are full of experts who are used to being agile learners, much like Eratosthenes, chief librarian at Alexandria and inventor of geography. For example, librarians might be liaisons for several disciplines and must retain and gain a breadth of knowledge to meet different information needs and know many resources. When it comes to finding stuff and keeping it organized, IS has a long history and great strength in that area, albeit mostly for documents or text-based information. One of the purposes of the book is to formally introduce information science, regardless of what that ‘S’ in GIS may mean to readers.

One noticeable barrier between the two groups is that at the heart of many traditional information agencies is the concept of sharing, or, at least, of connecting users with available information that meets their needs. I am confident that geographers and geospatial data experts understand their own GI better than anyone else ever could, but once one shares data beyond the typical user community, problems and questions are likely to arise, and those charged with answering those questions may face difficulties. With born-digital objects, the distinction between where data ends and metadata begins is problematic. I do see a clear distinction between the two groups in their understanding of metadata, but that may not be worth unpacking here, other than to say that it does impact the reuse of GI by empowering users to determine fitness for use. The metadata chapter of the book covers the terminology, value, and knowledge organization concepts in greater detail, and would be a great stand-alone primer for those needing an introduction to, or an updated review of, metadata.

DM: What advice would you give to geospatial professionals who want to know more about work opportunities in the library or information science community?

WB: Apply. Many professionals come from GIS to LIS, given that there are more openings than there are LIS graduates with these skills. It'd be great if more iSchools taught this specialty, but that requires more faculty with this expertise. Most academic jobs are posted on MAPS-L. One GIS librarian coming from geomatics said it best when I asked him why he moved to working in a library: “At the library, every day is different. Each user and their questions lead to new things, which is more fun than the monotony of the same analyses day after day, over and over again.” Additionally, I believe there are plenty of work opportunities for geospatial professionals in their current organizations related to information and data, and many would benefit from the workshops and training on data curation offered through several information science programs. If nothing else, there is this new book that reveals the interstitial research spaces between GIS and IS, including information organization, data discovery, fitness for use, user experience design, information services, human information-seeking behavior, and digital curation.

It Takes a Village: Intersections between Geospatial Professionals, Governments and Educators

Significant and widespread accomplishments involving digital technologies at a national level, whether in schools, homes, or businesses, are possible through cooperative planning and creative partnerships. The larger and more ambitious the project, the more coordination and long-term commitment will be required to increase the likelihood of measurable success. Given the ways in which geospatial technologies cross the sectors of government, infrastructure, and education, it is no surprise that examples from the world of geospatial technologies are emerging.

In 2007 the Uruguayan government launched Plan Ceibal, a plan to provide a laptop computer for each child enrolled in a public school, and made a parallel commitment to expand high-speed Internet access across the country. Since then, reliable Internet access has enabled notable national programs in health care, agriculture, and social services, as referenced in the video Uruguay Digital 2015. The Internet has also allowed Plan Ceibal to pursue increasingly innovative uses of those laptops, such as providing online instruction for learning English and accessing open educational resources that are aligned with school subjects.

Of course, there’s a place for GIS in this mix too. The Ministry of Transportation and Public Works’ National Bureau of Surveying has partnered with gvSIG, a Spanish association that develops open source GIS software, on gvSIG Batoví (Spanish), which aims to be “GIS applied to educational environments intended for Plan Ceibal and based on gvSIG.”

However, the initial version of gvSIG Batoví was designed only for the limited operating system of the Ceibal laptops, a constraint for an initiative with broader potential and ambitions. Thus the partnership has led gvSIG to develop gvSIG Educa, a prototype for what a country-specific, educational GIS might look like. The idea is that both students and teachers have access to a GIS that comes complete with numerous layers of data at many relevant geographic scales, which users can combine to produce their own maps in support of their educational goals. Ongoing efforts to develop and enhance the platform have been aided by contributions from the global OSGeo community, such as a recent contribution via Google’s Summer of Code.

Meanwhile in Uruguay, activities continue that benefit all sectors involved. Workshops and classes have been offered to both teachers and students, and the platform is being shared with future geography teachers in their teachers’ college. Prepared materials (in Spanish) for those events, such as this manual for a workshop for secondary students and this one for geography teachers, can be downloaded from the OSGeo website. Through its involvement with these programs, the National Bureau of Surveying has opportunities to share its activities with potential future employees, and the data being produced as part of Uruguay’s spatial data infrastructure is reaching new national audiences. The Geospatial Information Technologies Working Group of the University of Uruguay’s College of Engineering, another contributing partner in the project, can also connect with both prospective students and relevant government departments.

In other countries, some partnerships are less formal or official, but the activities are equally valuable. For example, in Belize the Esri distributor Total Business Systems Limited (TBSL) is generous in how it provides GIS-based visualizations of data of national interest. During the national general elections in 2015, the company produced live maps that were shared online and over TV as results were being returned. To help put the results into a historical context, it produced an Esri Story Map that highlights changing electoral patterns and enables simple comparisons of the general election results over the last 30 years. This is but one of the map series available in the Belize GIS Education Portal that TBSL has built and maintains.

Companies such as TBSL donate time and effort to educational activities and resources because they are committed to long-term outcomes and the value of geographical thinking for an educated citizenry. Issues that have a specific geographic context are on the minds of many Belizeans, such as the disputed border with Guatemala and the risks associated with seasonal hurricanes and flooding. Using geospatial technologies like GIS to understand these topics is a no-brainer, and it isn’t difficult to get students excited about the technologies. TBSL just hosted its 5th annual World GIS Day Expo in November and over 900 students attended. Among the exhibitors were the Statistical Institute of Belize, the Belize Police Department, the Belize telephone company, and the Coastal Zone Management Authority. Creating opportunities for students to see diverse applications of the technologies in both the government and private sectors is an obvious but fundamental step towards future workforce awareness.

As in Uruguay, educators in Belize are also learning about the possible roles for geospatial technologies in teaching and learning. The same week as the Expo, TBSL organized and hosted two workshops for primary, secondary, and tertiary school educators that focused on the potential for GIS to support learning across the curricula. (Full disclosure: one of this article’s authors, Diana Sinton, was an instructor in those workshops.) These may even have been the very first GIS educational workshops in Belize, and the country has no particular “champion” within its government currently promoting and encouraging the use of educational GIS, but even baby steps eventually lead somewhere.


GIS Day 2016 in Belize

There is no single, one-size-fits-all model for partnerships among commercial, governmental, public, and private entities when it comes to GIS and education. Instead, it’s a series of evolving dances in which multiple partners alternate taking the lead. Both top-down and bottom-up approaches have their time and place, as do proprietary and open source software solutions, and all of this will be happening concurrently anyway. When a government opens educational doors with programs like Plan Ceibal in Uruguay or ConnectED in the United States, organizations like gvSIG or Esri may be well-positioned to get their respective GIS feet in those doors. Or sometimes a local voice for a larger company plays that role, as when Spatial Innovision Limited signed on to manage GIS licenses for dozens of Jamaican schools on behalf of the government.

Ultimately, success is still dependent on the community to sustain and nurture the programs beyond their initial marketing and document-signing phases. It’s the boots on the ground that count in the end, so whether it’s GeoMentors or Geo For All, make sure you build the human connections into the plan. 

Journal of Geography Article Earns National Council for Geographic Education Accolade

A joint effort from Esri Education manager Tom Baker and a research group of seven university faculty members was selected as the Best Article for Geography Program Development by the National Council for Geographic Education (NCGE). Published in the Journal of Geography, the co-authored piece, entitled "A Research Agenda for Geospatial Technologies and Learning," provides a blueprint for advancing the study of geospatial technology (GST) in relation to education and learning.

“Research that advances understanding from and in GST has long been sparse, so the methodology outlined in A Research Agenda for Geospatial Technologies and Learning is not only insightful, it’s also an innovative asset for future studies to come,” said Zachary Dulli, NCGE chief executive officer. “As a collaborative effort of interdisciplinary academia and experts in spatial cognition, the resultant agenda stands out for being mindful of objectivity and a multitude of approaches to instructing GST, constructing curriculum, professional development, and achieving learning.”

In addition to Baker, article contributors included Sarah Battersby of the University of South Carolina, Sarah W. Bednarz of Texas A&M University, Alec M. Bodzin of Lehigh University, Bob Kolvoord of James Madison University, Steven Moore of the University of Redlands, Diana Sinton of Cornell University, and David Uttal of Northwestern University.

“All the authors sincerely appreciate this acknowledgement from the National Council for Geographic Education,” Tom Baker said. “Geospatial tools evolve rapidly, and our knowledge of learning processes with these tools needs to grow just to keep pace.”

Because of limited understanding regarding learning and GST, the agenda calls for a broad framework that is both systematic and replicable. Forthcoming studies should be evidence based, draw upon relevant theory, accurately describe the steps involved, connect concept and evidence, and apply to a range of settings and populations.

"Only cross-disciplinary, dynamic, and concerted research efforts will shed much-needed light on how we perceive, organize, understand, and communicate while learning with geospatial tools," Baker said. "We believe this agenda is one of the first significant steps in that direction and hope it encourages more researchers to incorporate the agenda in their future work."

NCGE will acknowledge the article contributors on Saturday, July 30 during its annual National Conference on Geography Education taking place in Tampa, Florida.

To read "A Research Agenda for Geospatial Technologies and Learning" in full, visit http://arcg.is/1X9nNfS.


UCGIS Awards its 2016 Education Prize to Diana Sinton

Ithaca, New York

March 31, 2016

UCGIS is pleased to announce that Diana S. Sinton will receive its 2016 Education Award.

Dr. Sinton has made extraordinary contributions to GIScience education in three key areas: connecting GIScience, cognitive science, and the learning sciences; promoting GIS across multiple curricula and disciplines; and developing GIS as an integrating technology linking curricula, infrastructure, and administration.

As the GIS program director for the National Institute for Technology and Liberal Education (NITLE), and subsequently as director of Spatial Curriculum and Research at the University of Redlands, Diana has made substantial contributions to national efforts to promote the use of GIS&T across the university curriculum. Throughout her career she has published widely about GIScience education at multiple levels. Importantly, Diana's writings provide key arguments for the ways in which spatial and geographical thinking contribute both to GIScience in higher education and to learning overall. Her publication The People’s Guide to Spatial Thinking (NCGE, 2013) exemplifies these perspectives. She has also worked in the United States and in Europe on numerous curricular and professional development projects, including the Spatial Citizenship Project (SPACIT), and as the creative lead for TeachGIS.org.

Dr. Sinton's impressive catalog of educational accomplishments is only half the story. Diana’s contributions to GIScience education also come through her qualities as an enabler as well as an advocate. Her persistence, clarity of vision, and collegiality have been instrumental in moving GIS&T forward in both K-12 and higher education.

Currently, Diana Sinton is an adjunct associate professor at Cornell University and also serves as UCGIS’s very own Executive Director.

UCGIS is a non-profit scientific and educational organization composed of more than 60 member and affiliate institutions. It was established in 1995 to advance research in the field of Geographic Information Science, to expand and strengthen multidisciplinary Geographic Information Science education, and to advocate for policies promoting the ethical use of, and access to, geographic information and technologies, by building and supporting scholarly communities and networks. UCGIS is a hub for the GIS research and education community in higher education and serves as a national and international voice advocating for its members’ interests.

Why mentoring matters: How Azavea and others are improving GIS education

Azavea designs initiatives and chooses projects that align with its business principles: to emphasize social responsibility and sustainability as much as profitability. That’s part of being a B Corporation and applying geospatial technologies for social impact, including seeking and supporting clients that are non-profit organizations. Completed Azavea projects such as PhillyHistory, DistrictBuilder, and OpenTreeMap exemplify these principles. Developing and sharing open source software products such as GeoTrellis, designed to make spatial analysis functionality more widely available, further embodies the company’s commitment to sharing its technological expertise with others.

With its Summer of Maps program, first offered in 2012, Azavea has systematically added undergraduate and graduate students into this cycle. Students learn to appreciate the types of projects that non-profits might undertake, see how they frame their questions, and experience what it would be like to work with and for such an organization. Azavea imagined it would be an attractive and popular opportunity for students, but could it have anticipated that in the summer of 2015 there would be 175 applicants for 3 positions? 175! 3! Getting a position is like winning the geospatial job lottery.

Many of the candidates not chosen for a summer spot with Azavea are competent, capable, and eager to learn and contribute. What keeps Azavea’s CEO, Robert Cheetham, from expanding the program is his desire to match the energy contributed by the summer mappers with the availability of company staff. These positions aren’t just regular summer GIS jobs: Azavea sets out to create a “high-quality first professional experience” for the students, and for a company its size, that necessarily means a small cohort of students.

Each “Fellow” works on two different projects, and they are involved with every aspect of project management and execution: scoping the work, preparing budgets, performing analyses, managing timelines and preparing presentations. Throughout, they have seasoned professionals modeling best practices, giving them constructive and regular feedback, and pointing them in the right direction.

Cost of learning to use some GIS software? $59.95 for a tutorial book and $5 for a bottle of headache medicine.

Cost of building your GIS expertise during three months of individualized mentoring by senior staff? Priceless.

Effective mentoring blends advice, training, modeling, supporting, and guiding. Mentoring can make the difference between barely, anxiously managing to scrape through the tasks of a single assignment and an excited dedication to a whole new career. All fields have their domain-specific knowledge areas, and all technologies have their idiosyncratic details. But someone who has been asked to “apply geospatial technologies to solve (or at least understand) a problem” might be required to know something about relational databases, map projections, coordinate systems, satellite ephemerides, digital image processing, obscure and possibly obsolete data compression formats, statistics, the modifiable areal unit problem, software licensing, web services, and conventions of cartographic representation, not to mention spatial analysis and principles of geography and spatial thinking. For novices seeking substantial skills and knowledge in the geospatial domain, this is not the work of a weekend workshop or a one-semester class, which is why mentoring is such a compelling model in this field. It harks back to the days of apprenticeship in a trade: you weren’t expected to know it all at the beginning, but you were expected to watch, learn, and practice, working alongside those who had the experience.

Any such program has its abusers. Months spent only pounding steel on an anvil is to a blacksmith apprentice what months spent only digitizing is to many GIS interns: hours that can be cruel and demoralizing. But done well, learning through an apprenticeship can be highly successful.

At Washington College in Maryland, the GIS Program has fully adopted and implemented this model, including its language. Undergraduate students serve at different levels of apprenticeship and earn their way up through the ranks to become journeymen and journeyman leaders, together forming a community guild. Director Stewart Bruce deliberately recruits first-year students as junior apprentices because he has seen how students grow in the program, acquiring competence and confidence over the years and becoming loyal and valuable employees. Though the program has grown to include professional staff as well, it is the students who have really allowed its capacity to expand, as the more experienced ones mentor the less experienced. Each year the program loses approximately 20 seniors to graduation, and the geospatial professional world gains those highly qualified students.

Now with funding from the Verizon Foundation, Bruce has also extended the guild model to provide geospatial opportunities for local youth. The METS Guild of Chestertown links weekend training for GIS with 3D visualization, gaming and web design, a seductive combination for many middle-school children. Eventually, Bruce envisions, these students may consider studying at Washington College itself, and the pipeline continues.

Mentoring models have also been implemented to help school districts and teachers take advantage of software donations. For example, last year Esri announced it would provide K-12 schools in the US with open access to its web-based mapping system, ArcGIS Online, through the national ConnectED initiative. But without knowing how to use GIS to support instruction and students’ learning, the donation could be as helpful to a given teacher as handing them an anvil and a bunch of steel. To address these gaps, the American Association of Geographers and Esri have collaborated on the GeoMentors program, which connects experts interested in volunteering their help with classroom teachers who are eager to begin learning.

The value of mentoring is undeniably powerful, but it is challenging to scale up to benefit larger audiences. Relying on active professionals volunteering their time is not a sustainable practice on its own, but it can work while other learning networks are established. Azavea hopes to increase Summer of Maps’ capacity by securing additional program sponsorship and then incorporating additional geospatial professionals as new mentors, resulting in more non-profits being served and continued high-impact learning experiences for more students.

’Tis the season to think of others. If you have ideas and examples to share about the practice of mentoring within the world of geospatial technologies, share them with me at diana.sinton@directionsmag.com, and we may follow up with another article in the new year.

Directions Exclusive: Ordnance Survey's Jeremy Morley, on R&D and digital navigational data

Directions Magazine’s Diana Sinton recently spoke with Jeremy Morley of the Ordnance Survey (OS), Great Britain’s official mapping agency. Since its establishment over 220 years ago, the OS has contributed its cartographic expertise to the military, political, civil, and social development of Great Britain. In this interview, Morley touches on the role of research and education at the OS and imagines what roles digital navigational data will play in the future.

Q: Your title at OS is Chief Geospatial Scientist.  Is that a position that the Ordnance Survey has long offered?  What are the job responsibilities and expectations?  Could you describe a typical day, if there ever is one?

A: The title of the post, at least, is new. Ordnance Survey has engaged in research internally and with the academic community for decades, and so has had a research manager to run that work. The new post aims to increase the visibility of the role.

For a number of reasons, research has been restructured inside OS: we run innovation, R&D, and research projects inside different divisions of the business, for example examining innovative applications of the latest technology to deliver our products and services. My job is to lead longer-term research, generally on a 3- to 7-year horizon, though we carry out shorter-term projects too. This includes building up our capacity internally; working with, and commissioning research from, external partners, especially universities; and discussing and promoting our research interests with funders and associations around Britain. We engage in research to solve particular, identified problems in collecting, managing, deriving, or delivering our products and services. We also invest in future-oriented research to understand future requirements and interfaces for our information, for example in smart cities, the Internet of Things, and autonomous vehicles. An important expectation is that while we aim for high academic quality in the work, the research has to have an effect inside the business, so knowledge transfer, or spin-in of the results, is essential. This need not produce a new product per se but may result in new capabilities to create new products and services.

Some days, therefore, involve interacting with colleagues around the business: from the Commercial division, who are in contact with customers, partners, and the market; through Operations, who run the factory that collects, manages, and derives data; to Products and Innovation, who define and create not only new products but also new platforms for delivery. On other days, I will be visiting university partners to discuss collaboration on existing and new projects, or presenting our research or interests at workshops and conferences.

In addition to research, my team is interested in education with a span from early age education (“primary schools” in the UK) all the way through to supporting post-graduate (master's and doctoral) education. We have collective data agreements to license and deliver our data to the school, college and university sectors in collaboration with our partners at the University of Edinburgh’s EDINA Digimap service. We also aim to support the development of school curricula in Great Britain to recognize the importance of geographical thinking and GIS, and to support teachers in delivering effective education in these areas.

Q: For many people who may not be particularly current with the Ordnance Survey, the traditional impression may remain that it’s a very closed environment, with extremely strict guidelines about access and distribution of their authoritative national data for the UK.  If the culture or practices have changed at the OS in the last few years, in what ways and how? 

A: Access to and licensing of OS data has changed greatly in recent years. Most of our medium- and small-scale data products are now available as open data under the standard UK Open Government Licence. We have for many years provided easy, low-cost access to products for research and education too, as I discussed above. And we’re busy working on new licenses and new means of accessing our data to make experimenting with and adopting products easier, including new APIs due soon. On top of this, we’re working on SDKs for working with our data. With the release of these new access mechanisms we will introduce a freemium license model, meaning that developers and home users will be able to access even our large-scale data in limited quantities for free. As is usual in the industry, we will begin charging when the volume of data being accessed exceeds certain thresholds. We hope that these developments will further enable customers’ and citizens’ access to our data.

Q: Two big trends in the world of geospatial data are the growing popularity of, and reliance on, open-source software and the incorporation of crowd-sourced solutions to data development and curation.  How have either of these been incorporated into the OS?

A: We are interested in both these areas. However, the biggest impact for us has been from open source. Most of our core systems still rely on the power of vendor software solutions, but we do use open source elements too. A mark of our interest in these technologies was our sponsorship of FOSS4G in the UK in 2013 at the top (Platinum) level.

Crowd-sourcing is interesting, but it is not something we’ve adopted as a core part of our data collection system so far. Our products and reputation are built on high-quality, consistent, national coverage, and we’re still working out how crowd-sourcing best fits that environment and best complements our use of surveying and photogrammetric data collection. One element to explore is the expert crowd, where we train or work with a limited pool of experts to provide information.

Q: How have you been able to integrate your own background as an educator of geospatial sciences and technology into your current position?  How about your own research ideas?

A: I previously worked in MSc education in geographic information science at both University College London, where I ran the program at the end of the ’90s, and more recently at the University of Nottingham. I helped develop and then ran a specialized undergraduate course at UCL in Geospatial and Environmental Information Management, which led to integrating geospatial engineering content into UCL’s civil engineering degrees. As a faculty member at both universities I directly supervised over a dozen PhD students and was involved in educating first-year digital economy PhD students in Nottingham’s Horizon Centre for Doctoral Training. This range of experience is invaluable for understanding the constraints and operation of GIS and geospatial education in the UK, and what faculty will find useful to support their teaching at different levels.

My research experience means, first, that I understand the measures of success that motivate faculty to engage in research with us, and the funding landscape within which we can either bid together with universities or support their bids. My move to Ordnance Survey was motivated by the compatibility of my research interests: the technical infrastructure of interoperability and internet services, the characteristics of crowd-sourcing and the human factors in geospatial systems, and the role of GIS in the new world of smart cities.

Q: What upcoming activities or projects at the OS are you excited about?  Which ones give you trepidation? 

A: A really exciting area that we’re exploring is that of connected and autonomous vehicles. This is a rapidly developing area of technology that promises to radically affect patterns of transportation and the market for cars and vehicles in general. What will be the role of geospatial information in these systems? Will autonomy be driven primarily by sensors and computer vision techniques that match images and point clouds to specially gathered databases of images and point clouds taken in a range of road conditions? Or will solutions dominate that use GNSS and geospatial databases as the primary reference, augmented by sensors to read exact road conditions, vehicle position, and obstructions? How will these systems interface with public and private information infrastructures (for example, to find parking spaces) and with financial systems (for tolls and charges)? Will we see a series of closed ecosystems from each manufacturing group or technology provider, or will some interoperability or national infrastructures emerge? Exciting times!

I’m not sure trepidation is the right word, but an area that still requires better definition is that of 3D products. It seems that there are growing market requirements, for example to feed building information modeling or city energy use analysis, but the exact specifications for a profitable, maintainable and national product remain an elusive topic for research and development.

Communicating with Maps Part 3: Considering uncertainty and error

An Exclusive Directions Magazine Series

In the third part of our series on Communicating with Maps, Diana Sinton discusses the complex and important role of inherent uncertainty in the maps we produce. As a means of communication, published maps are trusted by the public well beyond what they may have earned. My theory is that because so few people have ever made maps, they have no sense of how the data might have been collected, what decisions were made during map design, and how many opportunities for error the whole process provides; they simply accept a published map at face value.

But, if you were to hand someone a blank piece of paper and ask them to draw their hometown, the experience would be revealing. They may recall some topological relationships well — such as the sequence of streets between their home and school, or how to get to a friend’s house — but most people would also experience a tremendous amount of uncertainty. Maybe the results would include locational errors (drawing the school north rather than south of an intersection), or an attribute error (labeling a building a post office when it was really a bank). Just as likely, there would be blank areas in the sketch. Through this experience, the mapmaker would become aware of terra incognita and uncertainty about what was where.

In a similar way, every map contains imperfections. In his iconic book How to Lie with Maps, Mark Monmonier explains how we lie with maps through manipulations and distortions, deliberate or otherwise. Uncertainty, errors, mistakes, and omissions are inevitable. The complexity of the natural and social world must necessarily be simplified and generalized to be mapped, and subjective decisions are necessarily made in the map design process. That’s just the way it is, even though few are aware of it.

Meanwhile, maps continue to be the most popular and common form of graphic representations of our natural and social world. They’re used worldwide in decision-making processes every day. That won’t change, but more could be understood about uncertainty and error within the realm of geospatial information.

An analogy from statistics

Similar problems exist in the world of numbers. A probability, for example, is a derived calculation of the likelihood that an event occurs, and the likelihood of any particular outcome depends on how many total outcomes are possible. Statisticians use confidence intervals to communicate how much variability there could be in the outcomes if one tried to replicate the same measurement or pattern. Graphically, confidence intervals can be represented as error bars depicting the possible variability around a measured value. Probabilities, confidence intervals, and error bars are all ways that we communicate the uncertainty of measured, quantitative values in the social and natural world. Recognizing and acknowledging this uncertainty is part of the scientific process, though that can be a difficult message to accept.
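
As a minimal illustration, with made-up measurements and the usual normal approximation, here is how such a 95% confidence interval is computed around a repeated measurement:

```python
import math

# Hypothetical repeated measurements of the same distance (meters):
measurements = [99.8, 100.2, 100.1, 99.9, 100.3, 99.7, 100.0, 100.1]

n = len(measurements)
mean = sum(measurements) / n
# Sample standard deviation, then the standard error of the mean:
sd = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))
half_width = 1.96 * sd / math.sqrt(n)   # 1.96 ~ 95% under a normal model

print(f"{mean:.2f} m +/- {half_width:.2f} m at 95% confidence")
```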

There are just as many ways that uncertainty and error are part of the mapping process, and standards exist for how to measure and document them. The National Standard for Spatial Data Accuracy (NSSDA), which in the late 1990s replaced the 1940s-era National Map Accuracy Standards, applies a root-mean-square-error (RMSE) approach, together with 95% confidence intervals, to determine the positional accuracy of geospatial data. Take a dataset of X and Y point coordinates that fall at the center of two intersecting roads and compare them to the same point coordinates already accepted as true (because they were derived by high-accuracy methods or from an independent source, for example). Once the RMSE is calculated between these two datasets, the NSSDA explains that:

"Accuracy reported at the 95% confidence level means that 95% of the positions in the dataset will have an error with respect to true ground position that is equal to or smaller than the reported accuracy value. The reported accuracy value reflects all uncertainties, including those introduced by geodetic control coordinates, compilation, and final computation of ground coordinate values in the product."

Requiring data to meet standards is one approach to managing uncertainty and reducing the probability of errors. Although assessing potential errors in data sets can be a challenge, undertaking such quality control efforts can build trust in an organization. A good example of this is the European Marine Observation and Data Network, which requires anyone contributing data to complete a Confidence Assessment step in the submission process.

Scale

One way to tolerate and mitigate uncertainty is to modify scale. Measurements of sinuous perimeters, such as coastlines, will vary significantly depending on the length of the unit of measurement. There is power in method, and more precise methods are perceived to be more powerful. Modern mapping is filled with situations where our methods don’t align with our measurements, tools, or objectives: our version of measuring with a micrometer, marking with chalk, and cutting with an axe might be measuring with a smartphone, marking by heads-up digitizing, and clipping with an XY tolerance of inches. Our use of geospatial data at particular scales, resolutions, and precisions should be informed by, and aligned with, our mapping intent, our acceptance of error, and our tolerance for uncertainty. Mike Bostock illustrates this deftly with his explanation of geometric line simplification, and John Nelson reminds us how absurdly false decimal-place precision can be.
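
Line simplification makes this trade-off explicit. As an illustration, here is a minimal sketch of the classic Douglas-Peucker algorithm, one common approach to geometric line simplification, which discards vertices that fall within a chosen tolerance of the simplified line (the sample coordinates are arbitrary):

```python
import math

def _perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:                # degenerate segment
        return math.hypot(x - x1, y - y1)
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Simplify a polyline, keeping only vertices that deviate by more
    than `tolerance` from the segment joining the endpoints."""
    if len(points) < 3:
        return points
    # Find the vertex farthest from the endpoint-to-endpoint segment.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:                  # all vertices are close: collapse
        return [points[0], points[-1]]
    # Otherwise split at the farthest vertex and simplify each half.
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right

coast = [(0, 0), (1, 0.8), (2, -0.3), (3, 1.1), (4, 0.2), (5, 0)]
print(douglas_peucker(coast, 0.5))   # fewer vertices at a coarser tolerance
```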

Cartographic solutions

Modifying scale or aggregating data may mask some types of uncertainty, while alternative cartographic solutions may involve less compromise. For decades, cartographers have experimented with map symbols that are fuzzy, indistinct, or partially transparent to indicate to the viewer that some degree of uncertainty is associated with the corresponding data. Essentially these are cartographic versions of statistical box plots, which can themselves be made fuzzy to illustrate variability. Research has shown that certain visual variables, such as color intensity, value, or edge crispness, are more effective at communicating uncertainty than different shapes or sizes. Unfortunately, novel cartographic solutions, such as manipulating the shared borders between polygons to suggest an uncertain zone of transition, are at this point more readily achieved with drawing software than with mapping software.

Choosing how to label values in a map legend can also signal how confident one is in those values. Selecting decimal places appropriate to the data in question, or opting for a vaguer, relative description, may be the right approach. “Lower” and “Higher” may be just the right way to describe the spectrum of data values being shown, particularly when mapping modeled probabilities such as erosion or wildfire risk.

Concluding thoughts

Sharing news about uncertainty in maps isn’t meant to bring a mapping effort to a grinding halt. Uncertainty within mapping is a given; ignoring it only promotes the misuse of maps and undermines the credibility that they do deserve. Instead, expanding awareness may help us develop more effective ways to communicate information to map users and readers. It goes back to the intent of the map. For example, research is currently underway to determine effective techniques for deliberately adding uncertainty and errors to mapped data so that the privacy and confidentiality of the data can be maintained while valid patterns are still displayed.
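
One family of such techniques, often called geomasking, simply displaces each point by a random distance and direction. A naive sketch, with hypothetical projected coordinates and a hypothetical displacement range:

```python
import math
import random

def perturb_point(x, y, min_r=50, max_r=250):
    """Naive geomasking: displace a point by a random distance (meters)
    in a random direction so the exact location can't be recovered."""
    r = random.uniform(min_r, max_r)
    theta = random.uniform(0, 2 * math.pi)
    return x + r * math.cos(theta), y + r * math.sin(theta)

# Hypothetical projected coordinates (meters) of a sensitive location:
print(perturb_point(354210.0, 4519870.0))
```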

Expanding awareness about uncertainty and error in maps and mapping processes also matters for the developing problem of location fraud within the world of location-based services. Or, as this article is quick to point out, fraud is only one source of location inaccuracy that the business world is realizing it must confront. There is a whole new commercial audience out there that needs to know about minimizing error and uncertainty in the world of mapping and spatial analysis.

Our exclusive series, Communicating with Maps:

Communicating through Maps Part 1: Exploring the challenges and complexities of GIS mapping 

Communicating with Maps Part 2: Discussing the issues with CaGIS President Sarah Battersby

Communicating with Maps Part 3: Considering uncertainty and error

Selected References for Communicating with Maps

Communicating with Maps Part 2: Discussing the issues with CaGIS President Sarah Battersby

An Exclusive Directions Magazine Series

In the second part of our summer series on Communicating with Maps, Diana Sinton discusses issues and advances in mapmaking with cartographer Sarah Battersby, a research scientist at Tableau Software and currently the president of the Cartographic and Geographic Information Society.

Q: What are some of the key developments that you have seen in cartography in the last decade?

A: I think that one of the most exciting cartographic developments in the last decade is the explosion of online mapping and tools for map design. It’s amazing to think about the huge efforts that have gone into making it easy for people to visualize their spatial data, whether as a Google Map mashup, using desktop or online GIS, with d3 or other scripting libraries, etc. The downside to all of this is that I think it is still too easy to make a bad map, and way too easy to distribute that bad map to a wide audience. My cartographic archive of what not to do just keeps growing thanks to all of the great finds on Twitter and Facebook.

On the other hand, there are a lot of people who really care about helping others work with and understand spatial data and there is some great research in cartography, GIScience and in spatial thinking that I think will help shape the next generation of tools that we use to design maps to make them more intuitive, more beautiful and generally more effective for understanding spatial data.   

The growth of the open source geospatial community has also been impressive. It is exciting to see so many people dedicated to improving the world of geospatial data and technology, and to helping the world with geospatial, like the work coordinated by the Humanitarian OpenStreetMap Team.  I think this open source momentum is key in the future of cartography and GIS.

Q: People often bring up the issue that Web Mercator is used as a default projection with web maps. That creates a tension with all of us who were taught in cartography and GIS classes that the Mercator projection is almost always inappropriate for the maps we’re making; it grossly distorts areas toward the poles and is presumed to give people false ideas about the size of countries and even continents. How much of a problem is this really? Can we cross fear-of-Mercator off of our worry list?

A: A few years back I did a bit of “forensic cartography” research on this to try to figure out how Web Mercator became the standard, and I think it is because of the success of Google Maps — the projection is even occasionally referred to as “Google Mercator.” Other online mapping systems changed projection to match. I’m not sure what the logic was behind the original selection of the projection, but it is easier to tile a rectangular projection, and the equations for Mercator are simple.  The conformal property of the projection is also nice for local-scale mapping.  But…is it the only choice? I imagine that any rectangular projection should tile nicely, and I imagine that it won’t be too many years before we have online mapping systems that don’t tie us to a single projection. For instance, Bernie Jenny has done some amazing work with adaptive map projections.

As for the distortion in the Web Mercator projection, I think this is a significant issue for visual analysis.  I’m a big believer in one of Egenhofer and Mark’s principles of Naïve Geography, that “maps are more real than experience.” I have thought of this as the map becoming our source of truth; even if people know that there is distortion in the map I think there are very few people who can successfully compensate for it in reading the map. This is a significant problem for any distance or area-based analyses calculated in Web Mercator coordinates, as well as for the map reader trying to visually make sense of spatial patterns.

I definitely wouldn’t cross Web Mercator off our list of things to worry about. It is imperative for map designers to be actively thinking about and addressing issues with projection; otherwise their analyses may be hugely incorrect. It is also important for map readers to be cognizant of the distortions in Web Mercator and other projections. I don’t mean that I expect people to be able to identify and calculate distortion, just to maintain a healthy skepticism in their map reading.
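
Both halves of Battersby's point, the convenience of the tiling math and the severity of the distortion, can be seen in a few lines of Python. This sketch uses the standard "slippy map" tile formula; the example coordinates are arbitrary:

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Standard 'slippy map' tile indices for a Web Mercator service."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat)) / math.pi) / 2.0 * n)
    return x, y

def mercator_area_inflation(lat_deg):
    """Factor by which Mercator inflates areas at a given latitude."""
    return 1.0 / math.cos(math.radians(lat_deg)) ** 2

print(lonlat_to_tile(-0.1276, 51.5072, 10))   # a London tile at zoom 10
print(round(mercator_area_inflation(60), 1))  # areas at 60 N appear ~4x too large
```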

Q: What do you think are the top “gotcha” issues for mapping today, from the perspective of a cartographic software designer? What about from the perspective of John Q. Mapmaker?

A: I think that every cartographer has a set of pet issues that they always look for. For me, I often focus on classification and data normalization. It drives me crazy when I can’t figure out how the mapmaker decided to break up the data into classes. Are those quantiles? Natural breaks? Do the breaks have meaning? Class breaks make such a huge difference in the resulting pattern on the map and it drives me crazy when I see the default 5-class natural breaks map without any explanation. To me this is the sign that the mapmaker doesn’t know much about the data.  

I also see way too many maps that are really just population maps. Should it be a surprise that locations with more people tend to have higher counts of all sorts of other attributes? This is another problem of not thinking enough about the data. If you don’t know your data well, how do you make a map that tells a clear — and appropriate — story? 
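
Both of the habits Battersby describes, normalizing counts by population and choosing deliberate class breaks, are easy to sketch. The data below are invented, and quantile breaks are shown as just one alternative to a software default:

```python
# Hypothetical county data: (count of incidents, population)
counties = [(12, 5_000), (340, 210_000), (55, 40_000), (90, 18_000),
            (400, 520_000), (7, 2_500), (150, 95_000), (60, 30_000)]

# Normalize counts by population before classifying (rates per 10,000 people),
# so the map shows intensity rather than just where people live.
rates = [c / p * 10_000 for c, p in counties]

def quantile_breaks(values, n_classes):
    """Class boundaries that put roughly equal numbers of features
    in each class, one deliberate alternative to software defaults."""
    values = sorted(values)
    return [values[round(i * (len(values) - 1) / n_classes)]
            for i in range(1, n_classes)]

print(quantile_breaks(rates, 4))   # three boundaries for a 4-class map
```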

Q: You have the perspective of having taught students about mapmaking for many years, and have done much basic research in cartography. Now you are in the position of working with software designers to help them implement good mapmaking principles to help users of commercial software design more effective maps. How is this shift from basic to applied research working? How has it changed how you pose research questions?

A: It is great to focus on specific, applied problems tied to facilitating how people ask and answer spatial questions. There is still much to think about in terms of general cartography, but now that we’re at a time when it is so easy for anyone to take a dataset and turn it into a map, I think about what we can do to help people make better maps faster. My research has always focused on how people understand and use spatial data, so there hasn’t been a change in my research direction, but I have done a lot of stepping back to what I would call the “cartographic primitives.” Lately I’ve been doing a lot of thinking about very basic questions of what information we need to obtain from maps and what characteristics of a map would facilitate finding answers to these questions. I also spend a good bit of time thinking about what makes an interesting pattern on a map and how I can help someone make better choices about their map type, colors or classification to uncover these interesting patterns.

Essentially, I feel like the questions I face now are based on how we can take our collective research and applied knowledge about designing better maps and put it to use helping people that don’t have decades or even semesters of work in cartography. It’s an amazing challenge and hopefully I can do some good to help the world see and understand their spatial data more effectively.

Our exclusive series, Communicating with Maps:

Communicating through Maps Part 1: Exploring the challenges and complexities of GIS mapping 

Communicating with Maps Part 2: Discussing the issues with CaGIS President Sarah Battersby

Communicating with Maps Part 3: Considering uncertainty and error

Selected References for Communicating with Maps

Elevation data: Where to go and what to know

Digital representations of the surface of the earth are a key data set for many GIS projects, but locating, identifying, downloading and manipulating digital elevation data is not for the faint of heart. There are many different skills required and hundreds of tools, systems and instruments from which to choose. In this article, author Diana Sinton highlights available resources and need-to-know information.

Introduction to the digital elevation model

The most common digital representation of the surface of the earth presents values of elevation above sea level, often derived from sampled point measurements and stored in raster format as a digital terrain model or digital elevation model (DEM), or as a vector triangulated irregular network (TIN). Apart from generating a topographic surface itself, these data are also the basis for deriving slope gradient, slope aspect, and hillshade relief. Digital elevation data are central to transportation planning, land use planning, and geological and hydrological analyses, among countless other applications. For this article, we’ll focus on DEMs as a generic format of elevation data in digital form.
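
As a small illustration of how such derivatives are computed, here is a sketch of slope for the center cell of a 3x3 DEM window using Horn's finite-difference method, the approach common in desktop GIS packages; the elevations are invented:

```python
import math

def slope_degrees(window, cellsize):
    """Slope of the center cell of a 3x3 DEM window (Horn's method)."""
    (a, b, c), (d, e, f), (g, h, i) = window
    dzdx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cellsize)
    dzdy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cellsize)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# Hypothetical 3x3 block of elevations (meters) on a 10 m grid:
dem_window = [(105, 104, 103),
              (104, 103, 102),
              (103, 102, 101)]
print(round(slope_degrees(dem_window, 10), 1))   # a gentle, uniform slope
```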

For many years, the most common sources and scales for a DEM were the 10-meter and 30-meter resolution data organized and distributed by the US Geological Survey to align with its 7.5-minute topographic quad sheets. These original DEMs were derived from traditional photogrammetric methods or reverse-engineered from contour lines; errors and inaccuracies abound. Nine times out of ten, one’s area of interest sat at the intersection of four quad sheets, so there was great rejoicing when it became possible to download “seamless” elevation data, foregoing the need to edge-match or mosaic multiple datasets together.

The horizontal resolution of elevation data is often given in spherical units of arc seconds, or 1/3600 of a degree. One arc second represents approximately a 30-meter grid cell, so one-third arc second is approximately ten meters and one-ninth arc second is about three meters. However, these measurements hold true only at the equator, where latitude and longitude are evenly spaced. Moving toward the poles, the meridians converge and regular grid spacing becomes distorted: by 49 degrees latitude, an arc second of longitude has shrunk to about 20.25 meters and grid cells have become elongated.
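
The arithmetic behind those figures is straightforward under a spherical-Earth approximation; a short sketch (the mean radius value is itself a common approximation):

```python
import math

EARTH_RADIUS_M = 6_371_000   # mean Earth radius, a common approximation

def arcsecond_ground_distance(latitude_deg):
    """Approximate ground distance (meters) of one arc second of
    longitude at a given latitude, on a spherical Earth."""
    one_arcsec_rad = math.radians(1 / 3600)
    return EARTH_RADIUS_M * one_arcsec_rad * math.cos(math.radians(latitude_deg))

print(round(arcsecond_ground_distance(0), 2))    # ~30.9 m at the equator
print(round(arcsecond_ground_distance(49), 2))   # ~20.3 m at 49 degrees N
```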

Becoming familiar with the arc-second system of horizontal measurement is a worthwhile investment of time when navigating elevation data sites, but it may be even more important to understand the absolute and relative vertical errors within DEM data. The original production goal for the 7.5-minute USGS quads included a vertical accuracy standard of 7 meters, and up to 15 meters of variability was permitted (USGS Data Users Guide, PDF).

DEM meets Big Data in the US

Fast forward to 2015, and digital elevation information has intersected with the Big Data movement. In the United States, the National Elevation Dataset (NED) has replaced the former system of quad-based DEMs.  Significant efforts have been made to make the horizontal and vertical datums, elevation units, and projections or coordinate systems consistent or, where needed, optimized for a given locale. Root mean square errors for vertical accuracy have fallen to less than 2 meters across much of the NED collection.  Light Detection and Ranging (LIDAR) and interferometric synthetic aperture radar (IfSAR) have become the standard approaches for high-resolution data collection, allowing for improvements and upgrades throughout the United States. Unlike traditional bare-earth DEMs, these new sources also capture what sits on the surface of the earth, such as the heights of vegetation and structures. The new technologies have been particularly important in states such as Alaska, where conditions had never previously permitted consistent, high-quality data to be collected.
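
The payoff of capturing the surface as well as the bare earth is that feature heights come from a simple subtraction: a first-return surface model minus a bare-earth terrain model. A toy NumPy example, with made-up 3 x 3 grids, shows the idea.

```python
import numpy as np

# Made-up 3x3 grids, in meters: a first-return surface model (DSM)
# and a bare-earth terrain model (DTM) from the same LIDAR survey
dsm = np.array([[120.0, 121.5, 119.8],
                [122.3, 135.0, 120.1],   # 135.0: a treetop or rooftop
                [118.9, 119.2, 118.7]])
dtm = np.array([[119.5, 120.0, 119.0],
                [121.0, 120.5, 119.8],
                [118.5, 118.9, 118.4]])

# Surface minus bare earth = heights of vegetation and structures
heights = dsm - dtm
print(heights.max())  # 14.5 (m): the tallest feature in this toy grid
```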

Of course there are times when it is both desirable and necessary to access older data, particularly when making before-and-after comparisons of geomorphic change following earthquakes and volcanic eruptions. For such purposes, the USGS also maintains a collection of historic DEMs.

Global data resources

When elevation data outside of the U.S. are needed, two important sources are data derived from NASA’s Shuttle Radar Topography Mission (SRTM) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) global digital elevation model, now at Version 2.  Since its original collection in the year 2000, the SRTM data has been corrected and revised, and its 90-meter resolution coverage is some of the most comprehensive worldwide.  ASTER’s Global DEM has also undergone revisions and corrections, and its one arc-second (30-meter) resolution extends to even broader global coverage.
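
As a practical aside, SRTM tiles are often distributed as raw .hgt files: big-endian 16-bit integers, 1201 x 1201 samples per one-degree tile at 3 arc seconds (3601 x 3601 at 1 arc second), with voids flagged as -32768. A minimal reader, sketched below with a hypothetical tile name, is all it takes to get the data into an array.

```python
import numpy as np

def read_srtm_hgt(path, samples=1201):
    """Read a raw SRTM .hgt tile: big-endian 16-bit signed integers.

    3 arc-second tiles are 1201 x 1201 samples; 1 arc-second tiles
    are 3601 x 3601. Data voids carry the value -32768.
    """
    data = np.fromfile(path, dtype=">i2").reshape(samples, samples)
    return np.ma.masked_equal(data, -32768)  # mask the voids

# Hypothetical tile named for its southwest corner (40N, 105W)
# dem = read_srtm_hgt("N40W105.hgt")
# print(dem.min(), dem.max())
```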

New satellite technologies and demand for higher-resolution, more consistent data are driving advances in digital elevation data today.  In 2010, DLR, Germany’s national aeronautics and space research center, launched the TanDEM-X satellite to partner with the already-orbiting TerraSAR-X; the pair is now producing data designed to be high resolution, vertically accurate, and as consistent and reliable in its coverage as possible.

In the U.S., the current 3D Elevation Program has brought together multiple funding entities to produce and distribute nationwide LIDAR data coverage, with IfSAR-based data in Alaska. Acquiring and processing these data will take years, but there is wide agreement that it is a wise investment with extensive benefits for the public and private sectors alike. The specter of sea level change has also compelled NOAA to prioritize LIDAR-based topographic data for coastal regions.

Locating, identifying, downloading and manipulating digital elevation data is not for the faint of heart.  New interfaces for data discovery such as Reverb|ECHO come complete with 317 platforms, 658 instruments and 717 sensors from which to choose. Even the simpler National Map and Earth Explorer assume that users are familiar with the optimal spacing of LIDAR point clouds, arc-second measurements, and the deciphering of acronyms.  OpenTopography is specifically designed to lower the access barriers to high-resolution data, but to date its coverage is limited.

My advice? Give yourself plenty of time to sort out what’s available for your area of interest and what you really need for your project or application. Being able to find exactly the data you seek, download it, figure out and manipulate its compression format, modify its projection or coordinate system and successfully add it to your project is likely to require persistence, patience and the knowledge of a rocket scientist.  Or two. 

UCGIS Tackles Geographic Information Science in the 21st Century

The University Consortium for Geographic Information Science (UCGIS) was established in 1995 to advance research in geographic information science, strengthen its use in education, and advocate for its ethical use by growing scholarly communities and networks. In July of this year, Diana Sinton became its latest executive director. Directions Magazine asked her about the organization and its latest challenge, the GIS&T Body of Knowledge, version two.
 
Directions Magazine (DM): The University Consortium for Geographic Information Science promotes and advocates for research and education in the field. What exactly is geographic information science? Is that an academic/research term or should geographic information systems practitioners be using it too?
Diana Sinton (DS): Geographic information science refers to the knowledge of how geographic information can be represented, modeled, analyzed, understood and reasoned with, etc. No geographic information system could exist without someone having applied that type of knowledge to the design and building of the GIS, and there is also a science behind how GIS is used to support spatially-based decisions. The term “science” shouldn’t be off-putting to practitioners, and it’s the best word for this collection of information. It simply references information that can be systematically explained and applied. Geographic information science contributes to all of the functions behind our geographic information systems. 
DM: UCGIS grew out of the U.S. National Science Foundation’s establishment of the National Center for Geographic Information and Analysis (NCGIA) back in 1988. How do the challenges UCGIS faced back then compare to those on the docket today?
DS: When UCGIS became incorporated as a non-profit organization in 1995, far fewer people appreciated the important role that geospatial data and technologies could play in the world. In the intervening two decades, both knowledge and applications have spread, and there is less need to convince anyone in industry, science or government, for example, of the importance of these areas. Geospatial data are firmly part of the Big Data movement today.
 
However, UCGIS works directly with and on behalf of institutions of higher education, and the messages about GIScience are not as widely recognized and appreciated by that audience. Understanding the value and opportunities around geographical thinking and perspectives can be a hard sell in academia. Moreover, the economic crisis of the last few years has eliminated much of the discretionary funding that institutions and government agencies used for organizational memberships in the past. Our operating budget comes almost entirely from dues payments, providing relatively little long-term security at this point. Thus, diversifying our sources of income is now an element of our long-term planning.
 
Twenty years ago, GIS and GIScience were tiny players on a university campus. UCGIS was founded primarily by the heavy hitters at large, public universities: faculty with steady and ambitious research agendas. They most often represented a single department on their campus, probably the geography department. Fast forward 20 years, and the GIS presence on campuses is wholly different. At large institutions, scholars involved in GIScience-informed research are likely to be active and present in multiple departments, branching way out from geography alone. Because of the growing interest in GIS as an entry-way to learning in many disciplines, institution-wide GIS centers are now common among UCGIS member schools. This abundance itself can be a challenge to manage, and to leverage. As spatial analysis and geographic data visualization become more commonplace, how does the role of GIScience evolve and continue to be relevant? This is both a challenge and an opportunity for our member institutions, and therefore for UCGIS too. We discussed some of the specifics in this overview article on GIS use and adoption published last year in Directions Magazine.
DM: The vast majority of UCGIS members are U.S. colleges and universities. Should other organizations consider joining? Why?
DS: Colleges and universities will continue to comprise our core set of members, but our affiliate membership plan is designed with other organizations in mind. UCGIS holds an important spot at the nexus where GIScience and GIS&T meet within higher education. It’s our mission to stay current on the issues that affect GIScience research and education: policies and legislation, trends and practices, curriculum and workforce demands. Being part of UCGIS means having a seat at the table, becoming part of the community of practice that not only values these issues, but is well-informed about them. Our relatively small size allows us to be nimble and responsive, as well as strategic and proactive. We facilitate networking and outreach, and seek opportunities for creative and effective partnerships with industry, government and the private sector, when the projects are aligned with our mission and in the best interest of our members.
 
If it’s important for a group, organization or institution to reach and engage with this audience, the people at the intersection of GIS&T and higher education, then involvement with UCGIS is an obvious choice. We welcome inquiries about new memberships.
DM: One key initiative of UCGIS is a revision of the Geographic Information Science & Technology (GIS&T) Body of Knowledge from 2006. It’s available as a free PDF, with support from Esri and AAG. The new version, to be known as BoK2, is expected in 2015. Why do we need an update? 
DS: The only thing constant is change, and that’s certainly true within a discipline that focuses on geospatial technologies. Curricula and the knowledge on which they’re built have to accommodate the changes within our discipline: new ways of creating and contributing data (VGI, crowdsourcing, new sensors, etc.) and new ways of engaging with technologies (mobile mapping, location-based services, etc.). No one ever intended the first version of the BoK to be the forever-version. It was a necessary first step, and its authors and UCGIS have known from the beginning that it would be revised at some point. Every effort is being made to have these processes be both transparent and participatory.
 
The BoK2 project will let us graduate from a paper-based, book format to an online platform that better facilitates interaction with the content, making it possible to explore and discover connections and learning pathways that are not now readily accessible. This is also an opportunity to bring other voices into the creative authorship mix, and to make strategic design decisions. We expect the new platform to include a sustainable information architecture and an infrastructure that allows for new content curation strategies. We want to improve the ways in which different audiences, such as natural learning communities, subject matter experts, and diverse groups of educators, can extract the particular knowledge that is most meaningful to them. More attention will be paid to alignment with the Department of Labor’s Geospatial Technology Competency Model, not because these two collections serve competing purposes, but because both represent efforts to benefit the GIS educational community and the workforce that relies on GIS&T.
DM: What is UCGIS’ role in developing the new document? What is the process to create the new version and how can practitioners and other interested parties participate?
DS: As the copyright holder of the original and future versions, UCGIS has taken the lead role in guiding this revision process. In late 2012, we asked John Wilson, of the University of Southern California’s Spatial Science Institute, to direct the multi-year project for us. Since that time, several workshops and information-gathering sessions have been held with different groups of stakeholders, and John has now organized a 25-member Steering Committee that is to begin an 18-month process of discussions, contributions and development. Several meetings will be held at which interested parties can share their ideas and have their voices heard, including the 2014 AAG conference in Tampa and the 2014 UCGIS Symposium in Pasadena.  On the UCGIS website, we will build a page dedicated to the BoK2 project, where we will share status updates and give the curious a chance to post questions and comments.