Book Review: Learning ArcGIS Geodatabases [eBook]

Title: Learning ArcGIS Geodatabases
Author: Hussein Nasser
Publisher: Packt Publishing
Year: 2014
Aimed at: ArcGIS – beginner to advanced
Purchased from: www.packtpub.com

Learning ArcGIS Geodatabases

After using MapInfo for four years, my familiarity with ArcGIS severely declined. The last time I used ArcGIS in employment, shapefiles were predominantly in use, but I knew geodatabases were the way forward. If they were going to play a big part in future employment, it made sense to get more intimate with them and learn their inner secrets. This compact eBook seemed like a good place to start…

The first chapter is short and sweet, delivered at a beginner’s level with nice point-to-point walkthroughs and screenshots to make sure you are following correctly. You are briefed on how to design, author, and edit a geodatabase. The design process involves designing the schema and specifying the field names, data types, and geometry types for the feature class you wish to create. This logical design is then implemented as a physical schema within the file geodatabase. Finally, we add data to the geodatabase using the editing tools in ArcGIS and assign attribute data for each feature created. Very simple stuff so far that provides a foundation for getting set up for the rest of the book.

The second chapter is a lot bulkier and builds upon the first. The initial task in Chapter 2 is to add new attributes to the feature classes followed by altering field properties to suit requirements. You are introduced to domains, designed to help you reduce errors while creating features and preserve data integrity, and subtypes. We are shown how to create a relationship class so we can link one feature in a spatial dataset to multiple records in a non-spatial table stored in the geodatabase as an object table. The next venture in this chapter takes a quick look at converting labels to an annotation class before ending with importing other datasets such as shapefiles, CAD files, and coverage classes and integrating them into the geodatabase as a single point of spatial reference for a project.

Chapter 3 looks at improving the rough and ready design of the geodatabase through entity-relationship modelling, which produces a logical diagram of the geodatabase showing the relationships in the data, used to reduce the cost of future maintenance. Most of the steps from the first two chapters are revisited as we are taken through creating a geodatabase based on the new entity-relationship model. The new model reduces the number of feature classes and improves efficiency through domains, subtypes, and relationship classes. Besides a new train of thought on modelling a geodatabase for simplicity, the only new technical feature presented in the chapter is enabling attachments in the feature class. It is important to test the design of the geodatabase through ArcGIS; testing includes adding a feature, making use of the domains and subtypes, and testing the attachment capabilities to make sure that your set-up works as it should.

Chapter 4 begins with the premise of optimizing geodatabases through tuning tools. Three key optimizing features are discussed: indexing, compressing, and compacting. The simplicity of the first three chapters dwindles and we enter a more intermediate realm. For indexing, how to enable attribute indexing and spatial indexing in ArcGIS is discussed, along with using indexes effectively. Many of you may have heard about database indexing before, but the concepts of compression and compacting in a database may be foreign. These concepts are explored and their effective implementation explained.

The first part of the fifth chapter steps away from the GUI of ArcGIS for Desktop and ArcCatalog and switches to Python programming for geodatabase tasks. Although kept simple, if you have absolutely no experience with programming or knowledge of the general concepts, this chapter may be beyond your comprehension, but I would suggest performing the walkthroughs as they might give you an appetite for future programming endeavours. We are shown how to programmatically create a file geodatabase, add fields, delete fields, and copy a feature class to another feature class, all achieved through Python using the arcpy module. Although aimed at highlighting the integration of programming with geodatabase creation and maintenance, the author also highlights how programming and automation improve efficiency.
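
To give a flavour of what the chapter covers, here is a minimal sketch of that kind of arcpy workflow (the paths, names, and fields are my own illustrative choices, not the book’s, and arcpy itself requires an ArcGIS for Desktop installation):

import arcpy

# create a file geodatabase in a working folder
arcpy.CreateFileGDB_management(r"C:\temp", "demo.gdb")

# create a point feature class inside it
gdb = r"C:\temp\demo.gdb"
arcpy.CreateFeatureclass_management(gdb, "wells", "POINT")

# add fields, then delete one
fc = gdb + r"\wells"
arcpy.AddField_management(fc, "OWNER", "TEXT", field_length=50)
arcpy.AddField_management(fc, "DEPTH_M", "DOUBLE")
arcpy.DeleteField_management(fc, "DEPTH_M")

# copy the feature class to another feature class
arcpy.CopyFeatures_management(fc, gdb + r"\wells_backup")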

The second part of the chapter provides an alternative to using programming for geoprocessing automation in the form of the Model Builder. The walkthrough shows us how to use the Model Builder to build a simple model to create a file geodatabase and add a feature class to it.

The final chapter steps up a level from file geodatabases to enterprise geodatabases.

“An enterprise geodatabase is a geodatabase that is built and configured on top of a powerful relational database management system. These geodatabases are designed for multiple users operating simultaneously over a network.”

The author walks us through installing Microsoft SQL Server Express and lists some of the benefits of employing an enterprise geodatabase system. Once the installation is complete, the next step is to connect to the database from a local and a remote machine. Once connections are established and tested, an enterprise geodatabase can be created and its functionality utilised. You can also migrate a file geodatabase to an enterprise geodatabase. The last part of Chapter 6 shows how privileges can be used to grant users access to data that you have created, or deny them access. Security is an integral part of database management.
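
To give a rough idea of how the connection and privilege steps might look if scripted rather than clicked through (a sketch only; the book works through the ArcGIS interface, and the server, user, and dataset names here are placeholders):

import arcpy

# create a connection file pointing at the SQL Server Express instance
arcpy.CreateDatabaseConnection_management(
    r"C:\temp", "gis_server.sde", "SQL_SERVER", r"MYPC\SQLEXPRESS",
    "DATABASE_AUTH", "gis_user", "gis_password", database="gisdb")

# grant another user read-only access to a feature class you own
arcpy.ChangePrivileges_management(
    r"C:\temp\gis_server.sde\gisdb.DBO.parcels", "readonly_user", "GRANT", "AS_IS")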

Overall Verdict: for such a compact eBook (158 pages) it packs a decent amount of information that provides good value for money, and it also introduces other learning ventures that come part and parcel with databases in general, and therefore geodatabases. Many of the sections could be expanded based on their material, but the page count would then increase into the many hundreds and beyond the scope of this book. The author, Hussein Nasser, does a great job of limiting the focus to the workings of geodatabases and not veering off on any unnecessary tangents. I would recommend using complementary material to bolster your knowledge of many of the aspects such as entity-relationship diagrams, indexing (both spatial and non-spatial), Python programming, the Model Builder, enterprise geodatabases, and anything else you found interesting that was only briefly touched on. Overall the text is a foundation for easing your way into geodatabase life, especially if shapefiles are still the centre of your GIS data universe.

Book Review: The ESRI Guide to GIS Analysis Vol. 1: Geographic Patterns & Relationships

Title: The ESRI Guide to GIS Analysis Vol. 1: Geographic Patterns & Relationships
Author: Andy Mitchell
Publisher: ESRI Press
Year: 1999
Aimed at: GIS/Analysts/Map Designers – beginner
Purchased from: www.wordery.com

GIS Analysis Vol 1

This textbook is a companion text for GIS Tutorial 2: Spatial Analysis Workbook (for ArcGIS 10.3.x), where you can match up the chapters in each book. Although not a necessity, I would recommend using both texts in tandem to apply the theory and methods discussed with practical tutorials and walkthroughs using ArcGIS.

The title of this book might lead you to believe that ArcGIS will feature heavily throughout the text, but Michael F. Goodchild sets this straight in the Preface by stating that he applauds ESRI for backing this book even though it isn’t Arc-centric. The author, Andy Mitchell, presents the material as generic GIS such that most GIS software packages should be able to utilise the techniques discussed.

Chapter 1 is a short introduction to what GIS analysis is, understanding the representation of geographic features in a GIS, and the common attributes associated with geographic features that allow for analysis. The wording is simple and easy to follow, and the chapter acts as a good entrance to the rest of the book.

The second chapter begins to delve into the realm of visual analysis, using your brain to discern patterns for a better understanding of the data and the area that you are mapping. Several real-life mapped examples are displayed to show how ‘mapping where things are’ aids more focused decision making. The chapter steps through deciding what to map, preparing your data, and making your map, with comparison figures to show you why you might perform such tasks.

Why map the most and least? Because mapping features based on quantities adds an additional level of information beyond simply mapping the locations of the features, a notion made clear by the real-life examples in Chapter 3. The author then takes us down a path to understanding quantities and the importance of knowing the type of quantities that you are mapping, and this naturally leads on to the next topics: classification, why you would use classes, and choosing an appropriate classification method/scheme for the purpose of your data. It is important to understand how classification methods such as Natural Breaks (Jenks), Quantile, Equal Interval, and Standard Deviation classify your data, and to have a general guideline for choosing the appropriate method.
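
To make the distinction concrete, here is a small Python sketch (the sample values are made up) showing how equal interval and quantile breaks can differ wildly on skewed data:

import numpy as np

values = np.array([2, 3, 4, 5, 5, 6, 8, 9, 12, 15, 21, 30, 55, 80, 144])
n_classes = 4

# equal interval: split the data range into classes of equal width
equal_interval = np.linspace(values.min(), values.max(), n_classes + 1)

# quantile: break at percentiles so each class holds roughly the same count
quantile = np.percentile(values, np.linspace(0, 100, n_classes + 1))

print("Equal interval breaks:", equal_interval)  # even widths, some classes nearly empty
print("Quantile breaks:", quantile)              # even counts, widths vary wildly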

A great recurring aspect of this book is that every chapter begins with a question; Chapter 4’s is ‘Why Map Density?’, and it then proceeds to answer the question and present the methods available for mapping density in a GIS. This chapter discusses density for defined areas, dot density mapping, and density surfaces, what the GIS does to create them, and the results of the output.

The fifth chapter takes a look at mapping what’s inside an area, discusses why you would want to map inside an area, and some of the analysis and results that can be derived from doing so. Do you need to map a single area to find what’s happening inside it, or multiple areas to analyse and compare what’s happening inside each? Methods are explained along with how the GIS performs them for analysis. You might want to find out if a certain feature is within an area, list all the features inside an area with a count of each, or sum the area of a designated land type within a boundary, for example. Summaries and statistics can also be generated from what is found inside an area boundary.

Having assessed some simple techniques for mapping what’s inside an area, the next chapter casts its attention towards finding what’s nearby. People often think of nearness in straight lines or along transport networks, but GIS is also useful for travel-cost analysis, giving weight to different land use or soil types, for example, when considering the path for a pipeline. Nearness by straight-line distance, distance/cost over a network, and cost over a geographic surface are discussed in detail. At this point we are venturing into some of the concepts behind Network Analysis.

The last chapter looks at mapping change with regard to change over time for time-pattern analysis. Three ways of mapping change are presented: creating a time series, creating a tracking map, and measuring change, along with the considerations required when creating each type for change in discrete features, events, summarized areas, and continuous categories and values.

Following the last chapter there are some recommendations for further reading.

Overall Verdict: The perfect companion for a GIS student embarking on their geospatial educational quest. The theory behind GIS is essential for accurate analysis and troubleshooting. This book is an easy read, with a plethora of figures and maps from real-life situations in each chapter to aid the experience. Although approaching two decades old, this text stands the test of time and acts as a solid base for a foundation in simple analysis using a GIS to find patterns and relationships.

The only shortcoming of a text of this nature is that you cannot see how methods and techniques discussed are performed in a GIS. This is where the companion text GIS Tutorial 2: Spatial Analysis Workbook (for ArcGIS 10.3.x) comes in and aids in providing walkthroughs to further enhance your understanding of the underlying theory.

Next: see The ESRI Guide to GIS Analysis Volume 2: Spatial Measurements & Statistics

The Web Mercator Visual and Data Analysis Fallacy

How many of you have looked at a web map with a Google Maps or OpenStreetMap basemap, you know, the one where Greenland looks like it’s the size of South America? Recently, I saw one of these maps with buffer zones spread across the United States. Each buffer was the same size, indicating that each buffer zone represented a similar-sized area of the Earth’s surface; as you’d expect, a 1000km radius buffer zone is a 1000km radius buffer zone! However, if Greenland is looking a similar size to South America, then more than likely the map is displayed in the Web Mercator projection (EPSG: 3857, or 900913), and the further you move away from the equator the more inaccurate and false those same-sized 1000km buffer zones become.

Web Mercator

Web Mercator map with 1000km buffer zone around selected cities.

Ok, let’s take a slight step back here for a moment and look at what a projection is. A projection is the mathematical transformation of the Earth to a flat surface. The surface of the Earth is curved, maps are flat so a projected coordinate system begins with projecting an ellipsoidal model of the earth onto a flat plane. Now that we have a flat map we can define locations using Cartesian coordinates with x-axis and y-axis values.

Projection, however, causes distortions in the resulting planar map. These distortions fall into four categories: shape, area, direction, and distance.

Projections that minimize distortions in…
…shape are called conformal projections.
…area are called equal-area projections.
…direction are called true-direction projections.
…distance are called equidistant projections.

The choice of projected coordinate system really boils down to two aspects. The projection should minimise distortions for your area of interest, but more importantly, if your map requires a particular spatial property (shape, area, direction, or distance) to hold true, then the projection you choose must preserve that property. It is possible to retain at least one of these properties but not all.
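
As a quick illustration of how badly a property can fail when the projection does not preserve it, the sketch below (assuming the pyproj library; coordinates are approximate) compares the true geodesic distance between two points with the naive planar distance measured in Web Mercator coordinates:

import math
from pyproj import Geod, Proj

geod = Geod(ellps="WGS84")
merc = Proj(init="epsg:3857")

# Dublin and Reykjavik, roughly
lon1, lat1 = -6.26, 53.35
lon2, lat2 = -21.94, 64.15

# true distance along the ellipsoid
_, _, true_m = geod.inv(lon1, lat1, lon2, lat2)

# straight-line distance in projected Web Mercator coordinates
x1, y1 = merc(lon1, lat1)
x2, y2 = merc(lon2, lat2)
planar_m = math.hypot(x2 - x1, y2 - y1)

# at these latitudes the planar figure comes out at roughly double the geodesic one
print("Geodesic: %.0f km, Web Mercator planar: %.0f km" % (true_m / 1000, planar_m / 1000))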

I recently read a book titled “Designing Better Maps” by Cynthia A. Brewer (you wouldn’t know it from the maps in this post though) and the following line stood out to me…

“If you see a map of the United States that looks like a rectangular slab, with a straight-line US-Canada border across the west, be suspicious of the mapmaker’s knowledge of map projection and of interpretations of the mapped data.”

This got me thinking about all those maps I see of the United States on a Web Mercator that thematically map data of census tracts or counties of states, or, as previously mentioned, show buffer zones/distances for visual analysis and/or data analysis purposes. A Mercator is a conformal projection and as such preserves angles (shape, as seen by the circles in the figure below) but distorts size and area as you move away from the equator. If focusing on a geographic region as large as the U.S., surely Web Mercator should be avoided at all costs unless the map’s sole purpose is navigation? A conformal projection should be used for large-scale mapping (1:100 000 and larger) centred on the area of interest, because at large scales (when using a conformal projection) the errors in area and distance are insignificant.

Tissot's Indicatrix WM

Tissot’s Indicatrix used to display distortions on a Web Mercator

The figure above uses something called the Tissot Indicatrix. Here we have a Web Mercator map; the circles at the equator cover a similar area on the globe as those further north and south of the equator. Hold on, what? Surely those bigger circles towards the poles cover a much larger area on the Earth than those smaller ones at the equator! This is false, but why is this? It is because the Web Mercator is a cylindrical projection, and we will get to this momentarily.

To fit the contiguous United States on to an A0 poster you need a scale of around 1:6500000, and 1:27500000 on an A4 page, far from large-scale mapping, yet we persist in using the Web Mercator for visualising data for the U.S. on small screens.

UPDATE: the Web Mercator is NON-conformal; please read Roel Nicolai’s comment below and also visit GeoGarage for more information. This post is to make you aware that using the correct projection is paramount for data analysis.

More on Conformal Projections

Conformal projections preserve local shape (and angles) i.e. shape for small areas. Take note that no map projection can preserve shapes for large regions and as such, conformal projections are usually employed for large-scale mapping applications (1:100000 and larger) and rarely used for continental or world maps. Local angles on the sphere are mapped to the same angles in the projection, therefore graticule lines intersect at 90-degree angles. Point to remember: conformity is strictly a local property.

Use a conformal projection when the main purpose of the (large-scale) map involves:
• measuring angles
• measuring local directions accurately
• representing the shapes of features
• representing contour lines

Cylindrical Projection: The Cause for Distortion in a Web Mercator

Cylindrical Projection

A cylindrical projection (above) is like projecting the earth’s surface on the inside of a tube and then rolling out the tube to be left with a flat rectangle. In a cylindrical projection world maps are always rectangular in shape. Scale is constant along each parallel (line of latitude) and meridians (lines of longitude) are equally spaced. The rectangular nature results in all parallels having the same length and all meridians having the same length. But since the real Earth curves in toward the poles, in order to get those straight lines, you have to stretch and distort the surface more and more as you get closer to the north and south poles. In fact, it is impossible to see the poles because as you approach them, the distance between latitude lines stretches out toward infinity.
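
A back-of-the-envelope sketch (assuming a spherical Earth, so the figures are approximate) makes the stretching concrete: a degree of longitude shrinks by the cosine of the latitude on the ground, yet is drawn at a constant width on the map, so the east-west exaggeration grows as 1/cos(latitude):

import math

equator_deg_km = 111.32  # approx. length of one degree of longitude at the equator

for lat in [0, 30, 45, 60, 80, 89]:
    ground_km = equator_deg_km * math.cos(math.radians(lat))
    exaggeration = 1 / math.cos(math.radians(lat))
    print("lat %2d: 1 deg of longitude = %6.1f km on the ground, drawn %6.1fx too wide"
          % (lat, ground_km, exaggeration))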

Ruining Life for Web Mercator Buffers

Let’s take a look at an example comparing data on a Web Mercator to a better suited projection for the contiguous U.S.

The figure below shows a selection of locations along the east coast of the United States in the Web Mercator projection. A buffer with a radius of 200km has been generated in the Web Mercator projection and applied to each point. We know from the Tissot Indicatrix that circles become enlarged as we move away from the equator, yet the size of the buffers remains constant as we move from south to north.

Web Mercator Buffers

If we convert the entire map to an equidistant projection such as the USA Contiguous Equidistant Conic (EPSG: 102005), we will see that the buffer zones alter, enlarging as we move from north to south.

Web Mercator Buffers Reprojected

So this tells us that the 200km buffer generated in the Web Mercator projection around Bar Harbor (the most northerly location on the map) covers far less area than the same buffer zone generated for Miami Beach (the most southerly location). This makes sense because of the stretched distortion of the land as we move north from the equator caused by the Web Mercator projection. The buffer zone generated in the Web Mercator projection has not allowed for these distortions.
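
To put rough numbers on this (using the sec(latitude) scale factor of the spherical Web Mercator, with approximate latitudes, so treat the figures as indicative only):

import math

buffer_map_km = 200  # buffer radius measured in projected Web Mercator units

for city, lat in [("Miami Beach", 25.8), ("Bar Harbor", 44.4)]:
    # ground distance = map distance * cos(latitude) under the spherical model
    ground_km = buffer_map_km * math.cos(math.radians(lat))
    print("%s: roughly %.0f km on the ground" % (city, ground_km))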

Now let’s generate the 200km buffer zones in the USA Contiguous Equidistant Conic projection, a projection that attempts to preserve distance.

Equidistant Buffers

Similar to the buffer zones created in the Web Mercator, each circular zone is the same diameter of 400km. We know that this projection (EPSG: 102005) is designed to preserve distance, so what do you think will happen when we reproject these buffer zones to Web Mercator? Think back to the Tissot Indicatrix figure. That’s right! As we move away from the equator these buffer zones are going to become enlarged, as shown in the figure below.

Equidistant Buffers Reprojected

The Equidistant Conic buffer zones in the Web Mercator map above more accurately define a 200km buffer zone around each location than those generated using the Web Mercator projection.

More on Equidistant Projections

Equidistant map projections make the distance from the centre of the projection to any other place on the map uniform in all directions. Take note that no map projection provides true-to-scale distances for every measurement you might make.

Use an equidistant projection when the main purpose of the map involves something like showing distances from the epicentre of an earthquake or another point location, or mapping flight routes from one city airport to all destination cities.

How Data Analysis Can Go Wrong

I won’t perform any in-depth analysis but will highlight how performing spatial data analysis using the Web Mercator projection can yield inaccurate results. It is good practice to convert all your data to a common projection when performing geoprocessing and spatial analysis tasks.
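
One defensive pattern, sketched below on the assumption that the shapely and pyproj libraries are available, is to do the buffering in a projection that preserves distance around the point of interest (an azimuthal equidistant projection centred on the point) and only reproject the result to Web Mercator for display:

from functools import partial
from pyproj import Proj, transform
from shapely.geometry import Point
from shapely.ops import transform as transform_geom

lon, lat = -80.13, 25.79  # Miami Beach, roughly

# azimuthal equidistant projection centred on the point - distances from the
# centre are true, so a buffer here is a genuine 200km buffer
aeqd = Proj(proj="aeqd", lat_0=lat, lon_0=lon)
web_mercator = Proj(init="epsg:3857")

# the point sits at the origin of its own projection; buffer in metres
buffer_aeqd = Point(0, 0).buffer(200000)

# reproject the buffer polygon to Web Mercator purely for display
to_mercator = partial(transform, aeqd, web_mercator)
buffer_3857 = transform_geom(to_mercator, buffer_aeqd)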

Census Tract Counts

The figure above is a count of the census tracts that intersect the 200km buffer zones in each of the two projections, Web Mercator and USA Contiguous Equidistant Conic. It is easy to see that if you are going to be analysing demographic data based on location around a certain point, the two projections will yield contrasting results; in fact, majorly contrasting results for most locations. Big decisions are often reliant on spatial analysis. Analysing your data in an unsuitable projection can steer these decisions completely off course: future plans may be scrapped based on the Mercator results, a decision made in error where the Equidistant Conic results would have shown that the project should have proceeded.

Similarly, if you need to preserve the area of features, such as land parcels, for analysis and visual display, you might consider an equal-area projection like the USA Contiguous Albers Equal Area Conic. Equal-area projections are also essential for dot density mapping and other density mapping such as population density. Equal-area maps can be used to compare the landmasses of the world and finally settle that Greenland is a lot smaller than South America.

According to Kenneth Field (a.k.a. the Cartonerd)…

“If you’re going to be comparing areas either for city comparison or for thematics you really do need an equal area projection unless all of your cities sit on the same degree of latitude. If not, you’re literally pulling the wool over the eyes of your map readers and they leave with a totally distorted impression of the themes mapped.”

Check out vis4.net for an example of the Albers Equal Area Conic projection. If area is important to the underlying data being visualised for the United States, then this is one of the projections you should be using to display your data.

Conclusion

“Projections in a web browser are terrible and you should be ashamed of yourself.” – Calvin Metcalf

If you are using a web portal to perform data analysis through spatial analysis or visual analysis techniques, even if the final visualisation is in Web Mercator, at the very least make sure that the underlying algorithms churning away in the background producing your output are using an appropriate projection to achieve better accuracy. If you are paying a vendor for their services, make sure that their applications are providing you with accurate data analysis for better decision making. You will often hear the saying that ‘GIS analysis is only as good as the data used for the analysis’, and while this strongly holds true, the best of data can produce misleading results because of a poor projection choice.

With the ability to produce your own map tiles, JavaScript libraries such as D3.js to overlay vector data in the correct map projection, projection support in OpenLayers, a Proj4 plugin for Leaflet, and CartoDB, there is little excuse to allow the dictatorship of the Web Mercator to continue.

But Web Mercator isn’t all that bad. Projections are not important when people are only interested in the relative location of features on a map. So if you are simply dropping location markers on a map without the need for analysing the data, go ahead, use the Web Mercator. But if analysis of data is being performed it is a sin to use the Web Mercator.

P.S. I am still a Mercator sinner when it comes to display. I’m working on my penance.

Sources & Data

ESRI – Tissot Indicatrix Data
ESRI – Distances and Web Mercator
Tiger Geodatabases
Natural Earth Data
Cartonerd
Geo-Hunter
GISC – Slippy Maps
Geography 7
vis4.net – no more mercator
Map Time Boston – Mapping with D3
Calvin Metcalf – FOSS4G
CartoDB – Free Your Maps from Web Mercator

Book Review: Designing Better Maps: A Guide for GIS Users

Title: Designing Better Maps: A Guide for GIS Users
Author: Cynthia A. Brewer
Publisher: ESRI Press
Year: 2016
Aimed at: GIS/Map Designers – beginner
Available from: www.wordery.com

Designing Better Maps

“Lightness to enhance hierarchy, hue to enhance qualitative differences.”

This book is the perfect companion for anyone beginning their GIS adventure and can even teach a trick or two to those seasoned professionals. I remember back to my GIS postgraduate course and making maps for the first time. I always thought I had an element of artistic flair from my days in national school art class and I was none the wiser about my map’s lack of, well, everything, after graduating. Okay, I wasn’t placing ridiculously oversized north-arrows or scale bars that were of odd measurements and using psychedelic colouring schemes but I had no idea how far off I was from making publishing-worthy mapping outputs. Luckily my first and second jobs involved mass outputs of paper maps and it was here that I initially learned about many of the aspects of this book such as the placement and size of elements and the use of white space.

One great aspect of this book is that although it is an ESRI Press publication, their products are not shoved down your throat. ArcGIS gets a mention every so often but this book is far from a tutorial-style walkthrough and errs more on the side of theory and practicality. The design processes discussed can be performed in the majority of GIS software, and the option of performing edits in graphic design packages is also mentioned quite frequently.

The first chapter emphasises designing a map for its intended purpose and audience through the visual hierarchy of data, and by planning your layout, balancing empty spaces, and refining through experimentation. Choosing the right projection can also play a key role in how the data is displayed and how it visually impacts the final product.

Basemaps are an essential part of the design process and getting the background information right can be the difference between a good map and a great map. Chapter 2 cycles through many types of basemaps and how you can control their impact on the map through raster and vector based background information and how you can keep the important information prominent.

Chapter 3 puts forward the notion of self-explanatory maps. Whoever is reading the map should not have to seek out the author to explain what is going on. This is achieved through the legend, wording the title appropriately, utilising supportive text and placing it in context, and making use of mapping elements such as the north arrow, scale bar, and graticules and their position in the visual hierarchy.

Maps are everywhere these days: there are interactive webmaps, static webmaps as images on a website, maps embedded in documents such as PDFs, maps displayed on TV, and, believe it or not, maps still get printed on paper. With all the output and display options available you must make sure that your map can accommodate each medium it will be presented in. Chapter 4 explores the common pitfalls with exporting, especially with handling transparency, and finishes up with copyright and attributing the source of the data to avoid infringement.

The fifth chapter addresses an area that I have had problems with in the past and that is choosing the correct font(s) for the map. This chapter alone made the purchase of the book worthwhile.

From fonts to labelling the book flows naturally and we all know how cumbersome labelling can be when trying to finalise a map. Labels can often rank higher in the visual hierarchy than intended or wanted. Clear labelling helps your audience correctly interpret the mapped data. Size, weight, case, and lightness are just some of the criteria used to communicate differences in importance. With labelling being such a time-intensive part of map creation, knowledge of labelling conventions will improve efficiency in quality map production.

Chapter 7 gets to the nuts and bolts of what I’m sure most of you thought this book would be all about: colours. I have previously sat through a whole university module for digital media where RGB and CMYK colour played a role, but this chapter really simplifies it all before progressing to Chapter 8 and discussing the prevalent role of colour choices for features on a map.

Colours are intended to make your maps easier to read by ensuring that your map matches the logic of your data. The author discusses several colour schemes, sequential, diverging, qualitative, bivariate (sequential, diverging, qualitative) and then fine tuning the colour selection with custom colour ramps and making maps more accessible for people who are colour-blind.

The final chapter mainly builds upon Chapter 8 but also draws from several other chapters. The main focus of the chapter is customising symbols for ordered data and symbolising categorised data into qualitative classes. Point, line, and area symbols are discussed using a variety of symbolising methods such as proportion, graduation, size/width, shape, angle, and pattern. The author neatly creates a table of eight visual variables for points, lines, and areas, giving twenty-four basic ways to vary symbols for representing your mapped data. Building upon Chapter 8, symbols for multivariate, overlaid, bivariate, quantitative, and qualitative data are explored in relation to sequential and diverging schemes.

The Appendix provides us with information on the ColorBrewer website and there are pages of colour palettes for reference.

Overall Verdict: I wish I had this book in my possession when I first enrolled for a career in GIS. Even after years of producing and plotting hundreds of maps, this book has enforced some new, and reinforced some old, quality controls required to produce the best possible map for the intended audience. It’s an easy read, with chapters that are short but very informative and to the point. I will be keeping it handy as a reference, especially the chapters relating to fonts, labelling, and colours. If you are a seasoned GIS user and feel that your maps are still lacking that final bit of quality, this book is a good place to get you over that final hurdle. I am often guilty of rushing a map to final product and this book really enforces basic standards for efficient quality output.

Reproject a Polygon Shapefile using PyShp and PyProj

In this post I will use the PyShp library along with the PyProj library to reproject the local authority boundaries of Ireland, in Shapefile format, from Irish Transverse Mercator to WGS 84 using Python.

ITM to WGS84

To follow along, download the admin boundaries from the Central Statistics Office (CSO) and rename the files to Ireland_LA. Move the files to the directory that you want to work from.

You will need to install the libraries; you can use easy_install or pip by opening the command prompt and entering

easy_install pyshp
easy_install pyproj

or

pip install pyshp
pip install pyproj

Open an interactive Python window and enter the following to make sure that you have access to the libraries.

>>> import shapefile
>>> from pyproj import Proj, transform

If no errors are returned you are good to go.

My original attempt at converting the data led to this monstrosity…

Multipart Disaster

and I instantly realised that several local authority boundaries were made up of multipart geometry.

Multipart Geometry

We would need two constructs, one for rebuilding single geometry features and one for rebuilding multipart geometry features.

So let’s get to it. In your favourite Python IDE open a new script, import the libraries and save it. There is a link at the bottom of the post to download the code.

import shapefile
from pyproj import Proj, transform

Define a function to create a projection (.prj) file. See the post on Generating a Projection (.prj) file using Python for more info.

def getWKT_PRJ(epsg_code):
    import urllib
    wkt = urllib.urlopen("http://spatialreference.org/ref/epsg/{0}/prettywkt/".format(epsg_code))
    remove_spaces = wkt.read().replace(" ","")
    output = remove_spaces.replace("\n", "")
    return output

Define a path to your working directory where the Ireland_LA files reside. You can create a similar path to mine below or define your own; just make sure the Shapefile is located there.

shp_folder = "C:/blog/pyproj/shp/"

Using PyShp create a Reader object to access the data from the Ireland_LA Shapefile.

shpf = shapefile.Reader(shp_folder + "Ireland_LA.shp")

Create a Writer object to write data to as a new Shapefile.

wgs_shp = shapefile.Writer(shapefile.POLYGON)

Set variables for access to the field information of both the original and new Shapefile.

fields = shpf.fields
wgs_fields = wgs_shp.fields

We will grab all the field info from the original and copy it into the new. The ‘Deletion Flag’ as set in the Shapefile standard will be passed over (it is stored as a tuple, which the if statement checks for), and we want the data from the lists that follow it, which define the field name, data type, and field length. Basically, we are simply replicating the field structure of the original in the new Shapefile.

for name in fields:
    # the Deletion Flag field definition is a tuple - skip it
    if isinstance(name, tuple):
        continue
    else:
        args = name
        wgs_shp.field(*args)

Copy Field Template

Now we want to populate the fields with attribute information. Create a variable to access the records of the original file.

records = shpf.records()

Copy the records from the original into the new.

for row in records:
    args = row
    wgs_shp.record(*args)

In the above snippet the args variable holds each record as a list and then unpacks that list as arguments in wgs_shp.record(attr_1, attr_2, attr_3….), which creates a record in the dbf file.

We now have all the attribute data copied over. Let’s begin the quest to convert the data from ITM to WGS84! Define the input projection (the projection of the original file) and an output projection using PyProj.

input_projection = Proj(init="epsg:29902") # TM65 / Irish Grid; if your copy of the data is in IRENET95 / Irish Transverse Mercator use init="epsg:2157"
output_projection = Proj(init="epsg:4326") # WGS 84

We need to access the geometry of the features in the original file so give yourself access to it.

geom = shpf.shapes()

Now we loop through each feature in the original dataset, access every point that makes up the geometry, convert the coordinates for each point and re-assemble transformed geometry in the new Shapefile. The if statement will handle geometry with only one part making up the feature.

for feature in geom:
    # if there is only one part
    if len(feature.parts) == 1:
        # create empty list to store all the coordinates
        poly_list = []
        # get each coord that makes up the polygon
        for coords in feature.points:
            x, y = coords[0], coords[1]
            # transform the coord
            new_x, new_y = transform(input_projection, output_projection, x, y)
            # put the coord into a list structure
            poly_coord = [float(new_x), float(new_y)]
            # append the coords to the polygon list
            poly_list.append(poly_coord)
        # add the geometry to the shapefile
        wgs_shp.poly(parts=[poly_list])

The else statement handles geometries with multi-parts.

    else:
        # append the total amount of points to the end of the parts list
        feature.parts.append(len(feature.points))
        # empty list to store all the parts that make up the complete feature
        poly_list = []
        # keep track of the part being added
        parts_counter = 0

        # while the parts_counter is less than the amount of parts
        while parts_counter < len(feature.parts) - 1:
            # index of the first point in this part
            coord_count = feature.parts[parts_counter]
            # number of points in this part
            no_of_points = abs(feature.parts[parts_counter] - feature.parts[parts_counter + 1])
            # create list to hold the individual part - these get added to poly_list[]
            part_list = []
            # cut-off point for this part
            end_point = coord_count + no_of_points

            # loop through the points of this part
            while coord_count < end_point:
                for coords in feature.points[coord_count:end_point]:
                    x, y = coords[0], coords[1]
                    # transform the coord
                    new_x, new_y = transform(input_projection, output_projection, x, y)
                    # put the coord into a list structure
                    poly_coord = [float(new_x), float(new_y)]
                    # append the coords to the part list
                    part_list.append(poly_coord)
                    coord_count = coord_count + 1
            # append the completed part to the poly_list
            poly_list.append(part_list)
            parts_counter = parts_counter + 1
        # add the multipart geometry to the new file
        wgs_shp.poly(parts=poly_list)

Save the Shapefile

wgs_shp.save(shp_folder + "Ireland_LA_wgs.shp")

And generate the projection file for it.

prj = open(shp_folder + "Ireland_LA_wgs.prj", "w")
epsg = getWKT_PRJ("4326")
prj.write(epsg)
prj.close()

Save and run the file. Open the Shapefile in a GIS to inspect it, and have a look at the attribute table, nicely populated with the data. You should be able to configure the code for other polygon files: just change the original input Shapefile, set the projections (input and output), and save a new Shapefile. Also, don’t forget the projection file!

You can download the source code for this post here. Right-click on the download link on the page, select Save file as…, and before saving change the .txt in the filename to .py.

If anyone sees a way to make the code more efficient please comment, your feedback is appreciated.

Resources

For more information visit the documentation for the PyShp library and for PyProj.

Recommended Further Reading

Learning Geospatial Analysis with Python. Visit http://geospatialpython.com/ to get a 50% discount code. But hurry, the deal ends Jan 31, 2016.

CSV to Shapefile with PyShp
Generate a Projection (.prj) file using Python