OpenStreetMap User's Diaries
Links
- tools.geofabrik.de/
- hdyc.neis-one.org/
- osmose.openstreetmap.fr/
OpenStreetMap Blogs
12/02/2026-18/02/2026

[1] | DER SPIEGEL has built its own open-source mapping stack based on MapLibre and Protomaps | © MapLibre – Protomaps – map data © OpenStreetMap Contributors.
bicycle_parking=absent: this tag aims to document that no bicycle parking is available around a feature, for example a shop or station, making such gaps in infrastructure discoverable in data analyses. Related discussion is also taking place on the forum.
…geo/osm on Reddit, which offers fast parsing of OSM PBF files through handwritten protobuf decoding, optimised readVarint and readSint routines, and custom zlib decompression. The library can skip specific object types, generate file statistics, and extract geometries by region filter, making it suitable for building custom renderers.
Note: If you would like to see your event here, please put it into the OSM calendar. Only data that is there will appear in weeklyOSM.
This weeklyOSM was produced by MarcoR, MatthiasMatthias, PierZen, Raquel IVIDES DATA, Strubbl, Andrew Davidson, barefootstache, derFred, mcliquid.
We welcome link suggestions for the next issue via this form and look forward to your contributions.
Includes buildings and forest. I had fixed some of these on and off before, but only in a hit-and-run fashion, without keeping any systematic record. Now that I have time to pick this up again, I'm leaving a placeholder here first.
Second edit: Adding a layer tag to houses just to dodge the overlap checker?? Is this really something a human would do?
Taking a break for one week because of Ramadan and because I am installing Gentoo as my main system.
I have a large set of photographs I made while running. They are geotagged, as I took them with my phone camera. The compass direction is completely unreliable, but lat/lon is more trustworthy. I thought it would be an interesting experiment to extract greenery like grass and trees from these photographs. It can be a useful addition for creating routes that are more pleasant to walk, since the eye-level point of view is not available in OSM. As this is based on my personal photographs, it has the additional benefit of recommending routes that I tend to use. The first challenge I encountered is that out of a few thousand photographs, only a handful were taken during the daytime. After deduplicating and dropping all photos that contain no greenery, this becomes a relatively small set of waypoints. I decided not to extrapolate additional points along OSM ways to keep the dataset small and avoid adding misleading info. The greenery detection works well enough with the SegFormer model, although it is somewhat slow locally. My plan is to select waypoints from this dataset before calling OSRM. This way I get routes that are more enjoyable to walk and run, but are generally longer than the default shortest route. You can find my dataset on Kaggle.
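The waypoint-selection step could be sketched roughly like this (a simplified illustration, not the actual pipeline: the corridor width, score threshold, and helper names are made up), keeping only high-greenery waypoints that lie near the straight line between start and destination, to pass as via-points to a router:

```python
from math import cos, radians

def pick_green_vias(start, end, waypoints, corridor_m=500.0, min_score=0.3, max_vias=5):
    """Pick high-greenery waypoints lying near the straight start->end line,
    to pass as via-points to a router such as OSRM.
    Each waypoint is (lat, lon, greenery_score in [0, 1])."""
    # Flat-earth projection around the start point (fine for short runs).
    m_per_deg = 1 / 9e-6
    k = cos(radians(start[0]))
    def to_xy(p):
        return ((p[1] - start[1]) * m_per_deg * k, (p[0] - start[0]) * m_per_deg)

    ex, ey = to_xy(end)
    seg_len2 = ex * ex + ey * ey
    picked = []
    for lat, lon, score in waypoints:
        if score < min_score:
            continue
        x, y = to_xy((lat, lon))
        # Project the waypoint onto the start->end segment, clamped to [0, 1].
        t = max(0.0, min(1.0, (x * ex + y * ey) / seg_len2))
        dx, dy = x - t * ex, y - t * ey
        if (dx * dx + dy * dy) ** 0.5 <= corridor_m:
            picked.append((t, score, (lat, lon)))
    # Keep the greenest few, then order them along the route direction.
    picked.sort(key=lambda p: -p[1])
    return [p[2] for p in sorted(picked[:max_vias])]
```

The returned coordinates would then go into the OSRM request as intermediate waypoints.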
A few quick notes on some changes I made to OSM based on local knowledge.
Changed the point for the Riverside Centre building to reflect that it is now a Builder’s Corner hardware store.
Added a point for the nearby Hole in the Wall Centre
Defined an area for the Somerset Lofts apartment complex and added some details for it.
I’ve recently begun contributing street-level imagery on Mapillary and Panoramax in my local area. I figured that my dash cam was already recording anyway, so if it could be of use to anyone, why not share it?
Contributing to Mapillary was very easy; since my dash cam has an integrated GPS that encoded its data into the video file, I could just upload the video to Mapillary and their website would turn it into an image sequence. Panoramax requires you to preprocess the video into geotagged images yourself, which made it hard to contribute to. Some cameras can be configured to save periodic images instead of videos, but that didn’t work for me because I still needed the dash cam to work normally as a dash cam first and Panoramax instrument second. It took me a while to figure it out, so I’m writing this blog post to hopefully help out the next guy in the same situation.
The task involves four basic steps. I scripted a solution that works specifically for my dash cam model (Garmin 47) and operating system (Linux). If Panoramax continues to grow, I imagine that separate scripts could be written for each step to mix and match for different camera types and computing environments. The steps are:
Extract the raw GPS data from the dash cam video clip(s)
Along the GPS trace, create a set of evenly-spaced points
Extract images from the video occurring at the evenly-spaced points, and
Add the GPS and time data to the image files
One could go even further and automatically upload the images to Panoramax straight from the terminal, but that’s beyond my coding abilities.
Let’s take a look at each step in detail:
Thankfully, Garmin makes this relatively easy to do with exiftool. If you open the terminal in the directory with the video clips and run the command
exiftool GRMN<number>.MP4
The output will contain a warning:
Warning : [minor] The ExtractEmbedded option may find more tags in the media data
So we can modify the command into
exiftool -ee3 GRMN<number>.MP4
Now exiftool will output all the same information as before, as well as a series of blocks like the following:
Sample Time : 0:00:58
Sample Duration : 1.00 s
GPS Latitude : XX deg YY' ZZ.ZZ" N
GPS Longitude : UU deg VV' WW.WW" W
GPS Speed : 11.2654
GPS Date/Time : 2026:02:13 22:24:45.000Z
Jackpot! Now we can redirect the output to a file and get our GPS coordinates. We need to have a file saved in the working directory to tell exiftool how to format the data. So I saved the following as gps_format.fmt:
#[IF] $gpslatitude $gpslongitude
#[BODY]$gpslatitude#,$gpslongitude#,${gpsdatetime#;DateFmt("%Y-%m-%dT%H:%M:%S%f")}
Now we pass that to exiftool to only print the metadata we’re interested in. We’ll also put > gps.tmp to save the output to a file:
exiftool -p gps_format.fmt -ee3 GRMN<number>.MP4 > gps.tmp
And we’re done! Now we have the raw GPS information out of the video and into plain text.
To do this, I use Python to linearly interpolate between GPS points approximately 3 meters apart. And I do mean very approximately: instead of doing a proper distance calculation, I just eyeball how many meters are in a degree. One meter is very roughly 0.000009° of latitude. Degrees of longitude shrink towards the poles, so the longitude scale needs to be adjusted by the cosine of the latitude. I blindly use the latitude of the first point of the sequence and assume it doesn't change enough over the trace to matter.
from math import cos, radians
cosd = lambda x: cos(radians(x))
lat0 = ...  # latitude of the first GPS point in the trace
scale_lat = 1 / 9e-6                 # meters per degree of latitude
scale_lon = (1 / 9e-6) * cosd(lat0)  # meters per degree of longitude
Now it is easy to use the Pythagorean Theorem to estimate the distance between two points:
dx = scale_lon * (lon1 - lon0)
dy = scale_lat * (lat1 - lat0)
dist_between_points = (dx**2 + dy**2)**0.5
Compute this distance for each pair of consecutive points along the GPS trace, keeping a running tally of the total distance traveled. For example, consider the following data after you stop at a red light, sit for a while, and then keep going:
Pt | Dist | Tot
A | -- | 0
B | 10 | 10
C | 6 | 16
D | 2 | 18
E | 0 | 18
(sit at the red light...)
Q | 0 | 18
R | 1 | 19
S | 3 | 22
T | 7 | 29
U | 11 | 40
V | 14 | 54
(and so on)
Suppose you want image spacing of about 3 meters (about 10 feet or half a car length). So you want images at 0, 3, 6, 9, 12, 15, …, and so on. We can take point A as our first point, but we need to interpolate between GPS points to find evenly-spaced points. I’ll use the notation X -> Y N% to mean “interpolate N% from X to Y.” Then to find our desired points, we need:
Pt | Formula
0 | A
3 | A -> B 30%
6 | A -> B 60%
9 | A -> B 90%
12 | B -> C 33%
15 | B -> C 83%
18 | D
21 | R -> S 67%
24 | S -> T 29%
27 | S -> T 71%
30 | T -> U 9%
etc...
Since Garmin takes GPS measurements once per second, this is a convenient way to determine at exactly what time each new point occurred. For the point 60% from A to B, it’s just the GPS timestamp of A plus 0.60 seconds. For the latitude and longitude of the interpolated point, we can just interpolate the latitude and longitude coordinates separately. 3 meters is not even close to far enough for great-circle paths to matter. So e.g.
lerp = lambda a, b, x: (1 - x) * a + x * b
lat_interp = lerp(latA, latB, 0.6)
lon_interp = lerp(lonA, lonB, 0.6)
# And so on for each interpolated point
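Putting the running tally and the interpolation together, step 2 as a whole might look like this sketch (simplified: one GPS fix per second, the flat-earth scaling from above, and no handling of missing or duplicated measurements):

```python
from math import cos, radians

def resample(points, spacing=3.0):
    """points: one (lat, lon) fix per second. Returns a list of
    (seconds_from_start, lat, lon) spaced roughly `spacing` meters apart."""
    m_per_deg = 1 / 9e-6
    k = cos(radians(points[0][0]))          # longitude shrink at this latitude
    lerp = lambda a, b, x: (1 - x) * a + x * b

    out = [(0.0, points[0][0], points[0][1])]
    total = 0.0            # running tally of distance traveled
    target = spacing       # next distance at which we want an image
    for i in range(1, len(points)):
        (lat0, lon0), (lat1, lon1) = points[i - 1], points[i]
        dx = (lon1 - lon0) * m_per_deg * k
        dy = (lat1 - lat0) * m_per_deg
        d = (dx * dx + dy * dy) ** 0.5
        # Emit every target distance that falls within this one-second segment.
        while d > 0 and total + d >= target:
            f = (target - total) / d        # fraction along the segment
            out.append((i - 1 + f, lerp(lat0, lat1, f), lerp(lon0, lon1, f)))
            target += spacing
        total += d
    return out
```

Sitting at a red light produces zero-length segments, which the `d > 0` guard simply skips, matching the jump from point E to point R in the table above.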
Save this output to a file (I call mine processed_points.csv), and you’re done with step 2!
It is possible to extract a single frame of a video using ffmpeg. The time should be a decimal number of seconds after the start of the video to exactly three decimal places.
ffmpeg -ss <time> -i <video>.MP4 -frames:v 1 output.jpg
By default, ffmpeg compresses the images quite a bit; enough that I could notice a quality difference when I put a paused frame of the video side-by-side with an extracted image. We can force ffmpeg to improve the quality with the -q:v <number> option. A smaller number produces a higher quality image at the expense of file size and processing time. I’ve settled on a value of 3, but feel free to play around with this to get the quality or file sizes you want.
ffmpeg -ss <time> -i <video>.MP4 -q:v 3 -frames:v 1 output.jpg
ffmpeg will print a bunch of text to the console that we don’t care about. To avoid flooding the screen, use the -hide_banner and -loglevel options to reduce (but not completely shut up) the amount it outputs to the console:
ffmpeg -ss <time> -i <video>.MP4 -q:v 3 -frames:v 1 -hide_banner -loglevel fatal output.jpg
Since you are going to extract many images, you’ll have to use this command in a loop with a bunch of variables that change from iteration to iteration, e.g.
ffmpeg -ss $(printf "%.3f" "$time") -i "$input_dir""/DCIM/105UNSVD/GRMN""$num"".MP4" -q:v "$jpeg_quality" -frames:v 1 -hide_banner -loglevel fatal "$output_dir"/"$num""-""$(printf "%04d" $img_num)"".jpg"
My naming convention produces file names of the format video number-image number.jpg. So for example, the 25th image extracted from GRMN4567.MP4 would be named 4567-0025.jpg.
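If the shell quoting gets unwieldy, the same loop could be driven from Python by building each ffmpeg invocation as an argument list (a sketch mirroring the command above; the helper name is illustrative and the directory layout matches my card, yours may differ):

```python
def ffmpeg_args(input_dir, num, time_s, img_num, output_dir, quality=3):
    """Build the argument list for extracting one frame, suitable for
    subprocess.run(). Mirrors the shell loop above."""
    src = f"{input_dir}/DCIM/105UNSVD/GRMN{num}.MP4"
    dst = f"{output_dir}/{num}-{img_num:04d}.jpg"   # video number-image number.jpg
    return ["ffmpeg", "-ss", f"{time_s:.3f}", "-i", src,
            "-q:v", str(quality), "-frames:v", "1",
            "-hide_banner", "-loglevel", "fatal", dst]
```

Each row of processed_points.csv would then be run with subprocess.run(ffmpeg_args(...), check=True), avoiding any shell quoting entirely.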
And we’re almost there! Now we just need to put the metadata from step 2 into the images we just generated.
You can write tags to files with exiftool using the format:
exiftool -<key>=<value> <file name>.jpg
You can add multiple tags in a single line.
exiftool -<key1>=<value1> -<key2>=<value2> <file name>.jpg
Note that exiftool only supports specific keys, so it won’t write the metadata if it doesn’t recognise the key. It also keeps a backup copy of the original file by default, so to avoid duplicating each image, add:
exiftool -overwrite_original -<key1>=<value1> -<key2>=<value2> <file name>.jpg
This will write a line to the terminal to confirm after every single image. To avoid that, redirect the output to /dev/null. This tells the terminal to throw the output into a black hole, or the wardrobe to Narnia, or anywhere else besides the terminal.
exiftool -overwrite_original -<key1>=<value1> -<key2>=<value2> <file name>.jpg > /dev/null
For Panoramax to accept your images, you need all of the following tags:
-gpslatitude=45.6789
-gpslongitude=-123.456789
-gpslatituderef=N
-gpslongituderef=W
-datetimeoriginal=2000-01-02T03:04:05
If you are missing these, Panoramax will reject your image. Note that the latitude and longitude ref tags are necessary because exiftool doesn’t understand negative coordinates as being in the southern or western hemispheres. You have to provide them separately for the GPS data to be read correctly. If you forget to add them, Panoramax may accept the image but put it in the wrong place. The date and time should be given in ISO 8601 format. If you don’t specify a time zone, Panoramax will assume local time and automatically convert it to UTC on their site.
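Before uploading, it may be worth sanity-checking that every image actually received all five required tags, for example with a small helper like this (illustrative, not part of any Panoramax tooling):

```python
# The five tags Panoramax requires before it will accept an image.
REQUIRED = ("gpslatitude", "gpslongitude",
            "gpslatituderef", "gpslongituderef", "datetimeoriginal")

def missing_tags(tags):
    """Return the required Panoramax tags absent (or empty) in a tag dict."""
    return [k for k in REQUIRED if not tags.get(k)]
```

An empty return value means the image is safe to upload; anything else names the tags still to be written.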
You can theoretically add any tag in the EXIF specification. Some that I like for Panoramax are:
-subsectimeoriginal=067
-author=FeetAndInches
-make=Garmin
-model="Garmin 47 Dash Cam"
The SubSecTimeOriginal field is important for getting Panoramax to put your sequence in the right order. Since the images come from a dash cam, speeds of 10-20 m/s are common, so multiple images are taken per second of video. The DateTimeOriginal tag does not preserve fractional seconds (even if you provide them when writing the tag), so several pictures would be recorded as the same time and Panoramax would have to guess their order. Note that this needs to be provided as an integer string after the decimal point. So for a time of 51.328 seconds, you would write -subsectimeoriginal=328. For a time of 51.1 seconds, you would just write -subsectimeoriginal=1. For a time of 51.001 seconds, you would need to include leading zeroes as -subsectimeoriginal=001.
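One way to get that fractional part right is to format the time as a string first and strip the trailing zeros (a small helper sketch):

```python
def subsec(time_s):
    """Fractional seconds as the integer string exiftool expects:
    51.328 -> '328', 51.1 -> '1', 51.001 -> '001'."""
    # Format to exactly three decimals, take the part after the point,
    # drop trailing zeros but keep leading ones.
    frac = f"{time_s:.3f}".split(".")[1].rstrip("0")
    return frac or "0"
```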
If you don’t use the SubSecTimeOriginal tag, you can still get Panoramax to show your images in order if you use a suitable file naming convention. You can open the sequence on the website and select the option to sort by file name.
The author tag is a nice way to keep attribution with the image even if it gets shared outside Panoramax. The make and model tags fill in some of the camera information on Panoramax and help estimate your GPS accuracy, which feeds into the image’s quality score.
You can do step 4 in the same loop as step 3. Since the coordinates and time will change for each image, the command will look messy like:
exiftool -overwrite_original -gpslongitude=$lon -gpslatitude=$lat -gpslatituderef=$ns -gpslongituderef=$ew -datetimeoriginal=$timestamp -author="$exif_author" -subsectimeoriginal="$subsec" -make="$exif_make" -model="$exif_model" -usercomment="$exif_comment" "$output_dir"/"$num""-""$(printf "%04d" $img_num)"".jpg" > /dev/null
This post explains the basic principles of how to turn a video into usable images on Panoramax. I plan to write a second post going into the 201 level - things like how to deal with missing a single GPS measurement, duplicated measurements, getting sent to Null Island, how to detect erroneous data, using the videos immediately before and after to interpolate better at the edges, recursively doing this for multiple video clips, etc. But for now, I hope this has been useful to you.
If anyone is interested, I can share the entire scripts that I use right now. They’re a little buggy, only partially commented, and occasionally require some babysitting to make sure they work properly. But if something is better than nothing and you are willing to try and deal with someone else’s amateur code, please let me know.
Thanks for reading,
FeetAndInches
I spent some time today improving the map data in my local area using the iD editor. As a local, I noticed that several roads were still untraced.
I added the roads but got confused while selecting presets; then I realised that the more mapping I do, the better I will get at using presets. Each preset serves a unique purpose.
A few weeks ago I spent time mapping my school in my city. It was so much fun; I just wish they could use a more up-to-date satellite image.
Two years ago or so I started the OSM XRAY project, later I wrote about it in this blog post. Since then I have renamed this project to “OSM Spyglass” and I have kept working on it on and off.
At the State of the Map Europe 2025 in Dundee I gave a talk with the title “Everything Everywhere All At Once” about this project. You can see the video on Youtube. This got some people excited about the project, there is even some talk about putting the tool on OSMF infrastructure. Until this comes about the tool is now hosted at spyglass.jochentopf.com.
I am finally getting around to writing some more about what’s been happening since my first announcement and since the talk.
I keep fiddling with the user interface: an optional globe view (not much work for me now that MapLibre supports it out of the box), a map that is now resizable (horizontally), display of city names at some zoom levels, improved pop-up menus for keys and tags, and much more. Generally the UI has been getting faster and more reliable.
There are still some bugs to fix and plenty of possible improvements, and I’d be happy to receive feedback and ideas. It’s quite a lot of information we are trying to show here in limited space, so good ideas on how to do that are needed.
In the first blog post I wrote about some caching that I implemented in the database. That did work, but it turned out to be pretty useless: the user wants to access the newest data anyway, and we can keep up with minutely updates (at least at higher zoom levels), so I removed the caching completely for vector tiles and for high-zoom rasters. Only raster images at zoom levels up to 10 are cached; currently we cannot deliver them fast enough otherwise.
The database is updated from OSM using minutely diffs. We are usually about 3 to 5 minutes behind the OSM data, that’s just how long it takes the OSM servers to create the minutely diffs, push them out to their server and for our update job to download the data and to apply it to the database. It is unlikely we can improve on that much further. Spyglass shows the timestamp of the latest data it has in the bottom right corner. This timestamp is updated whenever new data is loaded, i.e. when you move the map or so.
Vector tiles are always generated on the fly from the current database. For higher zoom levels they contain all data; for medium zoom levels only “larger” objects are included, i.e. long ways and larger areas. At small and medium zoom levels raster tiles are shown, and they always contain all data. So at the medium zoom levels the gray raster data is overlaid with vector data in black (nodes and ways) or blue (relations): you can see everything, but only click on the larger items.
Raster tiles at small zoom levels are only updated once per day; for zoom 0 to 7 this happens by taking the zoom level 8 tiles and merging and rescaling them. I have spent quite some time optimizing this. The first version ran in the database but only generated black-and-white tiles; the current version uses code written in Go which creates grayscale images, which look much better. And it is much faster than the GDAL tools I tried for this task. GDAL is a great tool, but, as an “all purpose tool”, it has to cope with all sorts of different data sources, projections etc., which makes it much slower than a specialized tool for a specific use case. It now takes only a few minutes to create the low zoom tiles from the zoom level 8 tiles. And they are no longer stored in the database but on disk, which is easier, and they are faster to use that way, too.
Rasters are still generated in the database from the data. That is, unfortunately, not as efficient as one might think: we don’t need to copy the data out of the database into another process, and actually fetching the data does not seem that expensive, but the rasterizing itself costs time. This is probably something that could be improved inside PostGIS, or maybe we have to drop this idea altogether and move rendering outside the database. There is plenty of room to experiment and improve performance here.
Originally I used pg_tileserv as server to create the vector tiles from the database on the fly. It could also be tricked into creating the raster tiles. But I also needed GeoJSON output and some other API endpoints. I experimented with pg_featureserv which did work, but having two servers with lots of specialized PL/pgSQL functions in the database plus an ever growing configuration for nginx (used as reverse proxy) became too complicated and error prone. So I decided to rewrite the server from scratch in Go. Turns out it is really easy to write robust and featureful HTTP servers in Go, it comes with everything you need; the only external library I am using is for accessing the database. And deployment is really easy: Just copy over one Go binary and restart the server, no extra configuration files or functions to update in the database etc.
Everything is done three times, for nodes, ways, and relations: there are three sets of raster tiles and three sets of vector tiles. It is easy to switch those layers on and off in the UI. And then there is the key or tag filter. The vector tiles at higher zoom levels contain all the data, and the filter is applied on the client, which is very fast. For raster tiles the filtering has to be done on the server, which takes somewhat more time. Filtering is (silently) disabled at the small zoom levels, so you always see all data there. This isn’t great as a user experience; I’ll still have to figure out a way to make this transition more user friendly, or, ideally, allow filtering on all zoom levels.
It is a lot of fun to zip around the map and look at how far-away places are mapped. Try it out! And if you have any problems or ideas, open an issue on Codeberg.
There has been a very interesting question on the OSM US Slack lately.
“Does anyone have a method to search through the OSM database for a building of a particular shape? I need assistance finding OSM buildings with this specific shape. They should be located in NJ, DE, northeastern MD, eastern PA, or southern NY.”
The question quickly exploded into a huge discussion. At the time of writing, there are already 71 replies.
Someone suggested:
“You could load OSM buildings into PostGIS and then use ST_HausdorffDistance to compare the geometries.”
From there, the discussion veered into how to solve that specific puzzle and find the exact OSM building in question.
One person added, “So the strategy is: create the shape of the building you want to search for, scale it to, say, fill a 100x100 m bounding box or something. Ask Postgres to, within a search-area bounding box, take each building and scale it to a 100x100 m bounding box, compute the Hausdorff distance with the scaled input shape, and return all OSM element IDs and their Hausdorff distances, sorted in ascending order.”
Another said, “What I’m currently doing is combining several shape exports into a single file with around 20,000 objects that have concavity. Concavity plus more than 10 nodes eliminates most buildings.”
At that point, instead of hunting that elusive specific OSM building, I became more interested in the generalized version of the problem.
So I added my two cents to the discussion:
“The generalized version of this problem would be: can we represent a shape in some kind of data type that allows us to computationally check whether two objects have the same shape, regardless of rotation and scaling?
I haven’t studied the Hausdorff distance yet, but I’m wondering whether it can solve this problem, or if there’s a better alternative—Hu moments, Procrustes analysis, Fourier descriptors for contours…”
Someone replied:
“Hu moments are a good option. Elliptic Fourier Descriptors, Shape Context Histograms, Turning functions, etc. I’ve experimented with those four while trying to classify sports pitches more accurately. You can actually get pretty far with just compactness, convexity, and aspect ratio, thankfully.”
Do you have any other ideas on how to solve this problem?
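For intuition, here is a toy, pure-Python version of the scale-then-compare idea from the thread: normalize each outline into a unit bounding box, then take the symmetric discrete Hausdorff distance over the vertices. A real pipeline would use PostGIS’s ST_HausdorffDistance on densified geometries, and rotation invariance would still need an extra alignment step (e.g. Procrustes); this sketch only handles scale and translation:

```python
def normalize(pts):
    """Translate and uniformly scale points into the unit bounding box."""
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0  # keep aspect ratio
    return [((x - min(xs)) / s, (y - min(ys)) / s) for x, y in pts]

def hausdorff(a, b):
    """Symmetric discrete Hausdorff distance between two point sets."""
    def directed(p, q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in q) for px, py in p)
    return max(directed(a, b), directed(b, a))

def shape_distance(a, b):
    """0 for identical outlines up to scale/translation; grows with dissimilarity."""
    return hausdorff(normalize(a), normalize(b))
```

Ranking candidate buildings by this distance against the target outline, ascending, is exactly the strategy proposed in the thread.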
New CNEFE Tool Revolutionizes Street Name Correction in OpenStreetMap Brazil
The community of Brazilian mappers has just gained a powerful ally to improve one of the most crucial and, at the same time, challenging data points in any map: street names. The CNEFE Verification System platform has been launched, accessible at https://cnefe.mapaslivre.com.br, a tool created by and for the OpenStreetMap (OSM) community in Brazil, aimed at validating and correcting address data using the latest information from the 2022 IBGE Census.
The project is an initiative of UMBRAOSM (Union of Brazilian OpenStreetMap Mappers) and was developed by experienced mappers Raphael de Assis, president of UMBRAOSM and member of the OpenStreetMap Foundation, and Anderson Toniazo, both active members of the OSM Brazil community. The tool arrives to solve a long-standing bottleneck in national mapping: the updating and verification of street names based on official sources.

The Challenge of Street Names in Brazil
For those mapping in Brazil, one of the biggest challenges has always been the lack of a complete, accurate, and freely accessible street database. Through the Demographic Census, IBGE compiles the National Registry of Addresses for Statistical Purposes (CNEFE). This registry is a vast list of addresses from across the country, containing street names, address types, neighborhoods, and, in many cases, geographic coordinates, especially in rural and non-residential areas.
Historically, the OSM community has used CNEFE data from previous censuses (such as 2010) to enrich the map. However, the process was complex, involving downloading text files (fixed format), cross-referencing them with census tract shapefiles, and extensive manual work to match the information with the streets already drawn on the map, in addition to correcting spelling differences.
With the recent publication of the CNEFE 2022 microdata by IBGE, the need for an efficient tool to integrate this new data into OSM became even more evident.

CNEFE System: A Bridge Between Official Data and the Collaborative Map
It is in this context that the CNEFE Verification System emerges. The platform created by Raphael de Assis and Anderson Toniazo is not just a data viewer; it is a complete work tool, designed to optimize the collaborative verification and correction workflow.
The system’s intuitive interface allows mappers of all experience levels to:
Visualize CNEFE 2022 Data: The tool presents official address data from the most recent census clearly, overlaid on the map.
Compare with OpenStreetMap: The mapper can easily identify discrepancies between a street name recorded in CNEFE and the name currently present in OSM.
Correct and Include Names: When a street in OSM is unnamed (very common in less mapped areas) or has a different name than the IBGE registry, the tool facilitates the correction and inclusion of the correct name directly on the map.
Fill Gaps: In places where IBGE registered addresses, but the corresponding streets have not yet been drawn in OSM, the application highlights these areas, encouraging the complete mapping of road geometries and, subsequently, the addition of names.
The platform is already at version 1.0, updated on January 22, 2026, and features rich support material for the community. Mappers can access a step-by-step tutorial with images, watch demonstration videos, and even download complete PDF tutorials for offline consultation, ensuring everyone can make the most of the tool.

The Strength of the Community Behind the Tool
The development of the CNEFE System is a testament to the power and organization of the OSM Brazil community. UMBRAOSM, under the leadership of Raphael de Assis, has stood out for promoting initiatives that facilitate and professionalize collaborative mapping in the country. Projects like “Mapeia Crato” have already demonstrated the capacity of unity in training new mappers and carrying out large-scale tasks.
The partnership between Raphael and Anderson in developing this tool reinforces the community’s commitment not only to use open data but also to give back, creating ecosystems that improve the quality of geospatial information available to everyone. Their work aligns directly with broader discussions within the community, such as matching CNEFE 2022 variables with OSM tags, a fundamental step for any data import or validation process.

A Future with More Accurate Maps
The availability of the CNEFE System marks a significant advance for Brazilian mapping. By facilitating access and comparison with official Census 2022 data, the tool not only speeds up the map update process but also increases the reliability of the OpenStreetMap database as a whole.
For the end-user, whether a driver using a navigation app, a delivery person, or a researcher, the result is more accurate maps, with correctly identified streets and addresses that are easier to locate. The CNEFE tool is, therefore, a key piece in Brazil’s open data infrastructure, built collaboratively by those who understand the subject best: the mapping community itself.
A comunidade de mapeadores brasileiros acaba de ganhar uma poderosa aliada para aprimorar um dos dados mais cruciais e, ao mesmo tempo, desafiadores de qualquer mapa: os nomes das ruas. Foi lançada a plataforma Sistema de Verificação CNEFE, acessível em cnefe.mapaslivre.com.br, uma ferramenta criada por e para
The Brazilian mapping community has just gained a powerful ally for improving one of the most crucial and, at the same time, most challenging pieces of data on any map: street names. The CNEFE Verification System platform has been launched, accessible at https://cnefe.mapaslivre.com.br, a tool created by and for the OpenStreetMap (OSM) community in Brazil, with the goal of validating and correcting street data using the latest information from the IBGE 2022 Census.
The project is an initiative of UMBRAOSM (Union of Brazilian OpenStreetMap Mappers) and was developed by the experienced mappers Raphael de Assis, president of UMBRAOSM and a member of the OpenStreetMap Foundation, and Anderson Toniazo, both active members of the OSM Brazil community. The tool addresses a long-standing bottleneck in national mapping: updating and verifying street names from official sources.
The Challenge of Street Names in Brazil
For those who map in Brazil, one of the biggest challenges has always been the lack of a complete, accurate, and freely accessible street-name database. IBGE, through the Demographic Census, compiles the National Registry of Addresses for Statistical Purposes (CNEFE). This registry is a vast list of addresses across the whole country, containing street names, street types, neighbourhoods and, in many cases, geographic coordinates, especially in rural and non-residential areas.
Historically, the OSM community has used CNEFE data from previous censuses (such as 2010) to enrich the map. However, the process was complex, involving downloading fixed-format text files, cross-referencing them with census-sector shapefiles, and intensive manual work to match the information against streets already drawn on the map, as well as reconciling spelling differences.
With IBGE's recent publication of the CNEFE 2022 microdata, the need for an efficient tool to integrate this new data into OSM became even more evident.
The CNEFE System: A Bridge between Official Data and the Collaborative Map
It is in this context that the CNEFE Verification System appears. The platform created by Raphael de Assis and Anderson Toniazo is not just a data viewer; it is a complete working tool, designed to streamline the collaborative verification and correction workflow.
The system's intuitive interface allows mappers of all experience levels to:
View the CNEFE 2022 data: the tool presents the official street data from the most recent census clearly, overlaid on the map.
Compare with OpenStreetMap: mappers can easily spot discrepancies between a street name recorded in CNEFE and the name currently present in OSM.
Correct and add names: when a street in OSM has no name (very common in less-mapped areas) or has a name that differs from the IBGE registry, the tool makes it easy to correct it and add the right name directly to the map.
Fill gaps: in places where IBGE recorded addresses but the corresponding streets have not yet been drawn in OSM, the application flags those areas, encouraging complete mapping of the road geometry and, subsequently, the addition of names.
The platform is already at version 1.0, updated on 22 January 2026, and comes with a rich set of support material for the community. Mappers can follow a step-by-step tutorial with images, watch demonstration videos, and even download complete PDF tutorials for offline reference, ensuring that everyone can get the most out of the tool.
The Strength of the Community behind the Tool
The development of the CNEFE System is a testament to the power and organisation of the OSM Brazil community. UMBRAOSM, under the leadership of Raphael de Assis, has stood out for promoting initiatives that facilitate and professionalise collaborative mapping in the country. Projects such as "Mapeia Crato" have already demonstrated the union's capacity to train new mappers and carry out large-scale tasks.
The partnership between Raphael and Anderson in developing this tool reinforces the community's commitment not only to using open data but also to giving back, by creating ecosystems that improve the quality of the geospatial information available to everyone. Their work ties directly into broader community discussions, such as matching the CNEFE 2022 variables to OSM tags, a fundamental step for any data import or validation process.
A Future with More Accurate Maps
The availability of the CNEFE System marks a significant advance for Brazilian mapping. By facilitating access to, and comparison with, official 2022 Census data, the tool not only speeds up the map-updating process but also increases the reliability of the OpenStreetMap database as a whole.
For the end user, whether a driver using a navigation app, a delivery person, or a researcher, the result is more accurate maps, with correctly identified streets and addresses that are easier to locate. The CNEFE tool is, therefore, a key piece of Brazil's open data infrastructure, built collaboratively by those who understand the subject best: the mapping community itself.
Today I contributed to OpenStreetMap by improving map completeness in my local area in Bengaluru, Karnataka.
🔹 What I Worked On
Added a missing café using local knowledge
Verified placement to ensure it was mapped at the correct entrance location
Added appropriate tags, including amenity=cafe and name=Bean Stop Café
Checked for duplicate entries before uploading
🔹 Mapping Approach
I focused only on verified, ground-truth information and avoided copying from copyrighted sources. All additions were based on direct familiarity with the area.
🔹 Quality Checks
Ensured the point was not placed on the roadway
Confirmed correct spelling and capitalization
Reviewed surrounding features for consistency
🔹 Objective
The goal was to improve local POI completeness and contribute accurate, structured data to OpenStreetMap. This is part of my effort to make consistent, quality-focused contributions rather than large, unverified edits.
In one of my previous blog posts, I explored how to read live vehicle data through the OBD II port that is present in most (modern) cars. As mentioned in the outlook, the next step in my project is to combine vehicle telemetry with (accurate) positional information in order to enable more advanced analysis. To achieve this, I created a small GNSS test setup. The platform for all experiments is again a Raspberry Pi. For a first comparison, I selected two GNSS boards from Waveshare: the L76X GPS HAT and the ZED F9X GPS RTK HAT.

Why these two modules?
The L76X is an inexpensive entry-level device that is suitable for navigation, mapping or general position tracking. It supports GPS and BDS and normally delivers a position accuracy of a few meters. The ZED F9X belongs to a completely different class. It is a multi-band GNSS receiver that supports real-time kinematic (RTK) processing. When correction data is available, it can reach centimetre-level accuracy, which makes it suitable for robotics, surveying, precision agriculture or any application that requires very accurate geolocation data. The antenna systems also show clear differences. The L76X includes a simple single-band GPS antenna, while the ZED F9X works with a multi-band active GNSS antenna that can receive several frequency ranges at once. This antenna design is essential for achieving the high accuracy that the ZED F9X is capable of.
From the provided software to writing my own scripts
Both modules are delivered with example software and Python scripts on the manufacturer web pages. I tried using these examples first, but outdated Python versions and older code libraries quickly created compatibility problems. Because of this I moved directly to writing my own scripts, which turned out to be the better choice later on. The L76X operates at one update per second in its default configuration, but it can be configured to send up to ten updates per second. The ZED F9X can operate with even higher update rates, in some cases up to twenty five updates per second depending on the selected messages. However, not every communication protocol supports these higher update rates. I started with NMEA, which worked well up to ten updates per second. Above that limit the protocol becomes inefficient because the messages are relatively large. For the ZED F9X, switching to UBX made much more sense because UBX uses compact binary messages. Unfortunately the L76X does not support UBX, which means NMEA remains the only option for that board.
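To illustrate why NMEA is an easy starting point, here is a minimal sketch of parsing a GGA position sentence, including the XOR checksum every NMEA sentence carries. This is not the author's actual script, and the sample sentence used below is the common textbook example, not output from either board.

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Verify the two-digit hex checksum that follows '*' in an NMEA sentence."""
    body, _, given = sentence.strip().lstrip("$").partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)  # checksum is the XOR of all bytes between '$' and '*'
    return f"{calc:02X}" == given.upper()

def parse_gga(sentence: str):
    """Return (lat, lon) in decimal degrees from a GGA sentence, or None if invalid."""
    if not nmea_checksum_ok(sentence):
        return None
    fields = sentence.split(",")
    lat_raw, ns, lon_raw, ew = fields[2], fields[3], fields[4], fields[5]
    # NMEA encodes latitude as ddmm.mmmm and longitude as dddmm.mmmm
    lat = int(lat_raw[:2]) + float(lat_raw[2:]) / 60.0
    lon = int(lon_raw[:3]) + float(lon_raw[3:]) / 60.0
    if ns == "S":
        lat = -lat
    if ew == "W":
        lon = -lon
    return lat, lon

# Classic example sentence from NMEA documentation:
print(parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
```

In a real setup the sentences would be read line by line from the serial port instead of a literal string; the parsing stays the same.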
What comes next?
With the hardware and software configured and with automated startup and first measurement routines working reliably, the next step will be real world testing inside a car. In particular, I want to find out how the speed of the vehicle affects the quality of the GNSS measurements, how different surroundings such as hills, forests and tall buildings influence the accuracy, and how big the practical performance gap is between the simple L76X with its basic antenna and the ZED F9X combined with a multi band active antenna.
Hi everyone,
I recently noticed that many modern pedestrian crossings are equipped with automatic detection sensors that trigger the traffic signal without requiring a push button.
Currently, in OpenStreetMap, we can tag:
highway=crossing and crossing=traffic_signals for signalised crossings
button_operated=yes/no to indicate if a manual button is present
traffic_signals:sound=yes/no for auditory signals
However, there is no standard way to indicate automatic activation by a detector, whether of pedestrians or vehicles.
To address this, I have proposed a new tag on the OSM forum: detector_operated=yes/no, which would clearly indicate that a traffic signal is automatically triggered by a detector.
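Under the proposal, a signal-controlled crossing with automatic pedestrian detection and no push button might carry a tag set like this (detector_operated is the proposed tag, not yet an approved one):

```
highway=crossing
crossing=traffic_signals
button_operated=no
detector_operated=yes
```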
You can view and comment on the proposal here: https://community.openstreetmap.org/t/proposal-tag-traffic-signals-detector-operated-pedestrian-presence-sensor/141624
Here is an example illustration showing automatic pedestrian detection:

This tag would help improve mapping of intersections, pedestrian routing, traffic simulation, and accessibility information.
I’d love to hear your thoughts and experiences with automatic pedestrian detection at crossings in your area!
In the second 2026 edition of our OpenStreetMap interview series it was my pleasure to chat with Nicolas Collignon, co-founder and CEO of Kale AI, who are building urban routing solutions for delivery using OpenStreetMap.
I’m Nico, my background is in computational cognitive science. I’m now the CEO of Kale AI, a start up building technology for urban logistics planning. I initially got into OpenStreetMap during a side quest where I got really curious about how to better understand urban tissue, and how to represent it computationally.
Kale AI is a company focused on solving the inefficiency problem in urban logistics. We build tools to make complex logistics planning easy. It’s a very hard and interesting problem, and planning is one of the biggest weaknesses of LLMs. We’ve been focused on supporting the transition to Light EVs and cargo-bikes in modern urban logistics fleets. Light EVs are up to 2x more efficient in dense urban areas and use 95% less energy than diesel vans. They’re a multi-solution to improve urban life.
Different vehicles need tailored routing because urban space is becoming increasingly complex. With improving cycling infrastructure, Low Traffic Neighbourhoods and so on, all of this can lead to improved efficiency if we route vehicles better through street networks. For example, a 2-wheeled cargo bike might be able to take a shortcut that a 3-wheeler is blocked from by a bollard. That shortcut can save the 2-wheeler 5-10 minutes, while the slightly larger vehicle, forced to backtrack, could lose the same amount of time.
Most of our work doesn’t focus specifically on “navigation” but on planning, assigning deliveries to vehicles and designing the sequence of stops on those routes. Dantzig, who first proposed the Vehicle Routing Problem, explains quite well why it’s hard in his 1958 paper: “Even for small values of n the total number of routes is exceedingly large, e.g. for n = 15, there are 653,837,184,000 different routes.”
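Dantzig's figure corresponds to n!/2 orderings of n = 15 stops, counting each route and its reverse once; that reading of the quote is my own, not something stated in the interview, but the arithmetic checks out:

```python
from math import factorial

def route_count(n: int) -> int:
    # Orderings of n stops, with a route and its reverse counted as one
    return factorial(n) // 2

print(route_count(15))  # → 653837184000
```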
In our research, we found that deliverers spend 60-80% of their day not driving, but looking for parking and walking to the door. Different vehicles have different performance advantages in different parts of a city. Light EVs have a big advantage in the centre. Our work focuses on leveraging the different strengths of each vehicle type, and taking into account that diversity makes the VRP even harder to solve.
The data quality is surprisingly good in well-mapped areas. The OSM community is incredibly detail-oriented. But two challenges stand out for us.
The first is completeness and heterogeneity. Coverage varies enormously, not just between cities but within them, and sometimes between streets that are literally 300 metres apart. In our research we found a striking example in Boston where two neighbouring hexagonal cells with almost identical satellite imagery had wildly different tagging. One had 167 highway:service tags, the other just 3. In Chicago suburbs we found a municipality with the highest population density in Illinois where OSM had recorded only 8% of its buildings. That kind of patchiness is a real problem when you’re trying to build models that generalise across cities.
The second is semantic consistency. OSM relies on contributors to categorise things freely, which means the same real-world object can be tagged in multiple ways depending on who mapped it and where. We saw this clearly across our study cities. Contributors in Los Angeles tagged single-family homes as building=house, while the same homes in other cities were tagged with the catch-all building=yes. Locally that’s fine, but the moment you try to build a model that works across cities, those inconsistencies become noise you have to work around.
And beyond the map itself, OSM captures the physical world but not the operational reality of deliveries. How long it takes to park, unload, walk to a door varies enormously by urban context and is invisible to any map. In our research, service time turned out to be one of the biggest drivers of delivery efficiency, yet almost no publicly available data exists on it. That’s a gap OSM can’t fill alone, but it points to how much logistics-specific ground truth is still missing.
Keep tagging surfaces, seriously. It might feel niche, but it’s one of the most operationally significant pieces of data we use. The granularity OSM brings to surface data is something you simply can’t get from commercial providers, and it makes a real difference in planning accuracy.
Beyond that, access restrictions need more attention: bollards, width restrictions, turning restrictions, loading zone locations. These are the invisible barriers that can completely change how a fleet operates in a city, and they’re often missing or under-tagged. A restriction that a small vehicle sails through might stop a larger one entirely, and right now OSM rarely has enough detail to distinguish those cases.
More broadly, mapping Low Traffic Neighbourhoods and filtered permeability in a consistent, machine-readable way would be hugely valuable. These are increasingly shaping how urban freight actually moves, and having reliable structured data on them would let us plan far more accurately.
I think OSM is going to become even more foundational than it already is, but probably in ways that are less visible. A lot of the most interesting work being done today in autonomous mobility, urban planning, and logistics quietly depends on OSM as a base layer. That’s only going to grow.
What excites me is the intersection with AI. Models are getting better at extracting structured data from imagery, which could dramatically accelerate how quickly OSM reflects the real world: new infrastructure, surface changes, new access restrictions. The community’s role might shift from purely manual contribution toward curation and validation at scale.
And as cities get more complex, with more vehicle types, more restricted zones, more differentiated infrastructure, the value of a community that actually cares about tagging a bollard correctly becomes hard to overstate. That local, granular knowledge is something no corporate mapping effort has ever quite replicated.
Thank you, Nico! Wonderful to see OpenStreetMap becoming a core part of the infrastructure of modern cities. As people, companies, communities use and rely on OSM, they will in turn start editing and maintaining the data for all of us to benefit.
Forward!
Please let us know if your community would like to be part of our interview series here on our blog. If you are or know of someone we should interview, please get in touch, we’re always looking to promote people doing interesting things with open geo data.
In this changeset (176210161), I focused on improving building-level mapping by adding missing building outlines and refining structural details using Bing Maps aerial imagery.
The objective of this session was to enhance spatial accuracy and improve map completeness in the area. I ensured that:
This edit was completed using the iD editor (v2.37.3), and I requested a review to ensure quality validation and community feedback.
Working on building details helped strengthen my understanding of:
I will continue improving structured building data and map quality in Karnataka.
Today, I worked on improving map data around Yelahanka Taluku, Karnataka. I updated the official name of Sai Vidya Institute of Technology to reflect accurate real-world information and ensured proper tagging consistency.
In addition to correcting the name, I reviewed campus boundary structure, building tagging, and surrounding infrastructure to avoid dupli
Today, I worked on improving map data around Yelahanka Taluku, Karnataka. I updated the official name of Sai Vidya Institute of Technology to reflect accurate real-world information and ensured proper tagging consistency.
In addition to correcting the name, I reviewed campus boundary structure, building tagging, and surrounding infrastructure to avoid duplication and maintain data integrity. I verified that the edits align with real-world sources and OSM tagging standards.
My focus during this session was on:
This session helped reinforce the importance of precise tagging, version tracking, and reviewing live map data versus cached tiles. I will continue contributing to improving structured geospatial data across Karnataka.
✅ Info: problem resolved
The obsolete concrete block in Toulouse (coordinates: 43.5615376; 1.4920996) has been removed from OpenStreetMap.
The cycle route on Geovelo is now correct, with no unnecessary detour.
Context: on 17 February 2026 I resolved note #5169818, which reported a cycle-route problem in Toulouse (coordinates: 43.5615376; 1.4920996). An obsolete concrete block (left in place after roadworks had finished) was causing an unnecessary detour in route calculations.
Actions taken: - Correction in OSM: removal of the obstacle (changeset #178691426). - Waiting for Geovelo to update its data.
Direct link for testing: Geovelo - test route
To do: - Check around 19 March 2026 whether the route has been corrected on Geovelo/OSRM. - If the problem persists, reopen the note or contact Geovelo.
Location: see on OSM
#OpenStreetMap #Toulouse #Vélo #Contribution #Geovelo
Below are the people involved in the project "Pesaro needs you!".
A group of students was asked to improve the precision and accuracy of the map in their city (and its surroundings, as some of them live in neighbouring areas).
All edits will be considered valid if, and only if, they carry the hashtag #PCTOMarconi2026 and are made by the users involved in the project, listed below (only the username is given).
⚠️ If necessary, please contact me by email <[email protected]> or on Telegram (@galessandroni). I am more responsive on those channels than via OSM's internal messaging.
Of course, feel free to fix any vandalism or typos you come across.
| N | User (activity) | Edits |
|---|---|---|
| 0 | Galessandroni | Tutor |
| 1 | _basii | 0 |
| 2 | ||
| 3 | CoolCastle561 | 0 |
| 4 | ||
| 5 | ||
| 6 | Lorenzo-Cecchini | 0 |
| 7 | FedericoCrine | 0 |
| 8 | ||
| 9 | ANTOHH | 0 |
| 10 | ||
| 11 | Pit-_- | 2 |
| 12 | ||
| 13 | Roberto Fazzini | 0 |
| 14 | ||
| 15 | ga gasparri | 0 |
| 16 | ||
| 17 | dadograss | 0 |
| 18 | ||
| 19 | ||
| 20 | Ariannapagnoni | 36 |
| 21 | ||
| 22 | santa222 | 23 |
| 23 | ele stefanini | 4 |
| 24 | ||
| 25 | ||
| 26 | davide zagaria | 0 |
Last updated: 20 February 2026
♦

Many people have noticed that publicly available Overpass servers have been suffering from overuse (a typical “tragedy of the commons”). OSM usage policies generally contain the line “OpenStreetMap (OSM) data is free for everyone to use. Our tile servers are not”. Unfortunately, there have been problems with overuse of the public Overpass servers, despite the usage policy. “Just blocking cloud providers” isn’t an option, because (see here - use the translate button below) lots of different sorts of IP addresses, including residential proxy addresses, are the problem.
People who want to use e.g. Overpass Turbo do have the option to point it at a different Overpass API instance. If you’re using Overpass Turbo and you get an error due to unavailability, that is likely because the Overpass API instance it is using is overwhelmed. There are other public Overpass API instances, but they may not be complete (in terms of geography or history) or up to date.
At this point, if you’re one of the people who created the problem you’ll likely just spin up more instances to retry after timeouts and make the problem worse. Most people reading this are I hope not in that category. There are commercial Overpass API providers - more details for the example in that table can be found here.
Other people (including me) might wonder whether it’s possible (without too much work) to set up an Overpass API server that just covers one or two countries. To keep it simple, let’s restrict ourselves to Britain and Ireland (2.3GB in OSM), and let’s not worry about attic data (used for “queries about what was in OSM in the past”) or metadata.
Let’s just try and do regular Overpass queries such as you might start from this taginfo page, like this. I’ll also only target the Overpass API, and will use “settings” in an Overpass Turbo instance to point to my Overpass API server. I do want to apply updates as OSM data is changed.
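For reference, the sort of query involved is plain Overpass QL; a generic sketch follows (the tag and area are chosen purely for illustration, not the exact query behind those links):

```
[out:json][timeout:25];
area["name"="York"]->.searchArea;
nwr["amenity"="pub"](area.searchArea);
out center;
```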
I’m interested in creating a server covering the UK and Ireland. In terms of size, have a look at how much bigger or smaller your area of interest is than the 2.3GB of Britain and Ireland below and use that to judge what size server you might need.
At this point it’s perhaps worth mentioning that the documentation around Overpass is … (and I’m channelling my inner Sir Humphrey here) “challenging”.
There’s the OSM wiki which talks about “Debian 6.0 (squeeze) or Debian 7.0 (wheezy)”, the latter of which went EOL in May 2018. There is also an HTML file on overpass-api.de. That is … (engages Sir Humphrey mode again) not entirely accurate, in that it says to run something that doesn’t exist if you’ve cloned the github repository.
One of the best documents by far is external and is by ZeLonewolf; it starts off by saying “I found the existing guides to be lacking”. It then says “This is a combination of various other guides, the official docs, Kai Johnson’s diary entry…”
(which is the other “best document”)
“… and suggestions from @mmd on Slack. This guide is intended to demonstrate how to configure a server dedicated to only running overpass”. Kai’s diary entry from 2023 is definitely worth reading (sample quote there “Running an Overpass server is not for the faint of heart. The software is really finicky and not easy to maintain. You need to have some good experience with Linux system administration and the will and patience to deal with things that don’t work the way they’re supposed to”).
Also, this github issue (and things linked from it) summarises some of the issues that I had on the way to getting my test server set up.
Where what I’m doing below differs from what the other guides say, I’ll try to say why I am doing it differently. Usually it’s because my requirements are different (e.g. an Overpass server for a small area rather than everywhere, on a VPS rather than a piece of tin, or because I need only limited functionality).
For my use case, we’ll need a server that is publicly accessible on the internet to do this. I’m already a customer of Hetzner, so I’ll create a test server there. Other providers are available, and may make more sense depending where you are in the world and how much you want to pay. For testing, spinning up something at one of the hyperscalers might make financial sense, but I suspect not long-term. I went with a CX43 with 160GB of SSD disk space, 16GB RAM and a rather large amount of bandwidth. This turned out to be about the right size for Britain and Ireland. I went with Debian 13 and public ipv4 and ipv6 addresses. I don’t know if Overpass releases need a particular architecture, but went with “x86” rather than “ARM” just in case.
If your needs are different, you don’t have to use a cloud server for this, and Kai’s diary entry has a lot of information about physical server sourcing and setup.
Sizing was alas largely guesswork and trial and error - while I’m sure that the commercial providers know chapter and verse on this, there isn’t a lot written down about “sizing based on extract size” that isn’t “how long is a piece of string”. I found that loading even North Yorkshire (just 56MB in OSM) created a nodes file in the database area of 23GB, so that sets the minimum server size, even for very small test extracts.
The disk needs to be fast enough that updates can be applied in less time than the period they cover. If it takes 2 hours to apply 1 hour of updates, your server will never catch up. In practice I didn’t find this to be an issue with the servers at Hetzner and the relatively small extracts that I was working with.
In what follows I’ll use youruseraccount, yourserver and yourdomain in place of the actual values I used.
I already have some ssh keys stored at Hetzner, so when buying the server, I chose a new name in the format “yourserver.yourdomain” and added my ssh keys. I have yourdomain registered at a DNS provider, and I added the IPV4 and IPV6 addresses there. I can now ssh in as root to “yourserver.yourdomain”, and run the usual:
ssh -l root yourserver.yourdomain
apt update
apt upgrade
and bounce the server and log back in again.
The next job is to create a non-root account for regular use and add it to the “sudo” group:
useradd -m youruseraccount
usermod -aG sudo youruseraccount
chsh -s /bin/bash youruseraccount
I’ll create a new password in my password manager for youruseraccount on this server (obviously I used my account name rather than actually youruseraccount, but you get the idea…). Next, set the new account password to the newly chosen password
passwd youruseraccount
and check I can login to the new server as youruseraccount with that password, and become root:
ssh -l youruseraccount yourserver.yourdomain
sudo -i
exit
Install some initial software:
sudo apt install emacs-nox screen git tar unzip wget bzip2 net-tools curl apache2 wget g++ make expat libexpat1-dev zlib1g-dev libtool autoconf automake locate
That list includes both software prerequisites (apache2) and things that will be really useful (screen). It also includes emacs as a text editor; you can use your preferred one instead wherever emacs is mentioned below.
To use screen you just type screen and then press return. You can manually detach from it by using ^a^d and later reattach by using “screen -r”. If there are multiple screens you can attach to you’ll see something like this:
There are several suitable screens on:
95207.pts-2.h23 (02/15/2026 09:20:20 AM) (Detached)
95200.pts-2.h23 (02/15/2026 09:19:57 AM) (Detached)
1633.pts-2.h23 (02/14/2026 12:37:50 PM) (Attached)
Type "screen [-d] -r [pid.]tty.host" to resume one of them.
and you can choose which one to reconnect to by typing in (say) “95207” and pressing “tab”. To force a reconnection to a screen that something else is attached to, use “screen -d -r”.
In many cases below I’ll say “(in screen)” - this just means it’s a good idea to run these commands from somewhere that you can detach from and reattach to. It doesn’t mean you need to create a new screen every time.
The ssh keys that I had stored have been added for root by Hetzner, but I also want to add them to my new account too:
sudo -i
sudo -u youruseraccount -i
ssh-keygen -t rsa
(either use existing password for ssh passphrase, or create and store a new one)
exit
cp /root/.ssh/authorized_keys /home/youruseraccount/.ssh/
chown -R youruseraccount:youruseraccount /home/youruseraccount/.ssh
so that the copied authorized_keys file is owned by youruseraccount rather than root.
Next, check that you can ssh in to yourserver.yourdomain without a password. Then disable regular password access: we don’t want people to be able to brute-force passwords on a server on the internet, so we can just turn this off.
sudo emacs /etc/ssh/sshd_config
Find the line that says
# To disable tunneled clear text passwords, change to "no" here!
and uncomment and change the next two lines to say
PasswordAuthentication no
PermitEmptyPasswords no
save the file and then
sudo /etc/init.d/ssh restart
and then try and login (from the shell on that machine will work as a test)
ssh 127.0.0.1
It should say Permission denied (publickey).
Setting up a certificate is the next priority. Everything on the internet these days pretty much assumes https access, so let’s do that before even thinking about overpass. I’ll use acme.sh for that. Other providers and tooling are available and you can use them if you prefer. Login as your non-root account and then:
sudo -i
cd
wget -O - https://get.acme.sh | sh -s email=youremailaddress
exit
sudo -i
/etc/init.d/apache2 stop
acme.sh --standalone --issue -d yourserver.yourdomain -w /home/www/html --server letsencrypt
the last lines of the output you get should be like
-----END CERTIFICATE-----
[Sat Feb 14 12:51:45 AM UTC 2026] Your cert is in: /root/.acme.sh/yourserver.yourdomain_ecc/yourserver.yourdomain.cer
[Sat Feb 14 12:51:45 AM UTC 2026] Your cert key is in: /root/.acme.sh/yourserver.yourdomain_ecc/yourserver.yourdomain.key
[Sat Feb 14 12:51:45 AM UTC 2026] The intermediate CA cert is in: /root/.acme.sh/yourserver.yourdomain_ecc/ca.cer
[Sat Feb 14 12:51:45 AM UTC 2026] And the full-chain cert is in: /root/.acme.sh/yourserver.yourdomain_ecc/fullchain.cer
Next do
sudo a2ensite default-ssl
sudo a2enmod ssl
sudo systemctl reload apache2
and then edit the default site config
sudo emacs /etc/apache2/sites-enabled/default-ssl.conf
Replace the SSL references with the correct ones.
SSLCertificateFile /root/.acme.sh/yourserver.yourdomain_ecc/fullchain.cer
SSLCertificateKeyFile /root/.acme.sh/yourserver.yourdomain_ecc/yourserver.yourdomain.key
Restart apache
sudo systemctl restart apache2
and browse to https://yourserver.yourdomain to make sure that the certificate is working. You’ll need to arrange for that certificate to be renewed every couple of months, but let’s concentrate on overpass for now.
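acme.sh normally installs a cron entry for renewal, but it’s worth being able to check expiry yourself. A hedged sketch using openssl (GNU date assumed; the certificate path is whatever you configured above):

```shell
# Sketch: print the number of whole days until a PEM certificate expires.
days_left() {
  end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
  echo $(( ( $(date -d "$end" +%s) - $(date +%s) ) / 86400 ))
}
```

For example `days_left /root/.acme.sh/yourserver.yourdomain_ecc/fullchain.cer` should stay comfortably above zero if renewal is working.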
That is it for the initial server setup, so now would be a good time for a server snapshot or other sort of backup.
For this part, we’re going to follow parts of ZeLoneWolf’s guide. I’ve reproduced that mostly as written below, although some of the software already was installed earlier.
sudo su
mkdir -p /opt/op
groupadd op
usermod -a -G op youruseraccount
useradd -d /opt/op -g op -G sudo -m -s /bin/bash op
chown -R op:op /opt/op
apt-get update
apt-get install g++ make expat libexpat1-dev zlib1g-dev apache2 liblz4-dev curl git
a2enmod cgid
a2enmod ext_filter
a2enmod headers
exit
The username that we created above is “op”. We won’t use a password for that but will just use
sudo -u op -i
when we need to change to it from our normal user account.
We already have Apache set up with a default HTTPS website that says “It works!”. We’ll use some of what’s in ZeLoneWolf’s Guide but we DON’T want to completely replace our config with that one. Instead we’ll selectively copy in some sections. Edit the file as is:
sudo emacs /etc/apache2/sites-available/default-ssl.conf
Note that we are using https with the defaults and the filename is different to the example.
Find this line:
DocumentRoot /var/www/html
and after it insert this section:
# Overpass API (CGI backend)
ScriptAlias /api/ /opt/op/cgi-bin/
<Directory "/opt/op/cgi-bin/">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Require all granted
# CORS for Overpass Turbo
Header always set Access-Control-Allow-Origin "*"
Header always set Access-Control-Allow-Methods "GET, POST, OPTIONS"
Header always set Access-Control-Allow-Headers "Content-Type"
</Directory>
# Compression (for API responses)
ExtFilterDefine gzip mode=output cmd=/bin/gzip
# Logging
ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
# Long-running Overpass queries
TimeOut 300
I then deleted a bunch of lines - all comments or functional duplicates of what we had just added - down to, but not including:
# SSL Engine Switch:
Save and restart apache:
sudo /etc/init.d/apache2 restart
and check that you can still browse to “https://yourserver.yourdomain”. It won’t look any different as the default website has not been changed; we’ll test the “cgi-bin” parts later.
This is drawn directly from ZeLoneWolf’s guide. Note that this does NOT clone the github repository and build it locally. At the time of writing the latest version is “v0.7.62.10” so you’ll see that number below.
sudo su op
cd
wget https://dev.overpass-api.de/releases/osm-3s_latest.tar.gz
tar xvzf osm-3s_latest.tar.gz
cd osm-3s_v0.7.62.10/
time ./configure CXXFLAGS="-O2" --prefix=/opt/op --enable-lz4
That took 5s when I ran it. Next:
time make install
That took 9 minutes. Next:
cp -pr cgi-bin ..
cd
chmod -R 755 cgi-bin
mkdir db
mkdir diff
mkdir log
cp -pr osm-3s_v0.7.62.10/rules db
Those three directories are for the database, the minutely diff files and the log files. In operation, by far the biggest will be “db”: we’ll expect a 2.3GB .pbf extract to create a database of initially 80GB or so. We’ll talk more about this later.
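Given those sizes, a quick guard before loading is handy. A sketch (GNU df with `--output` assumed):

```shell
# Sketch: succeed only if the filesystem holding directory "$1"
# has at least "$2" GiB available.
need_gib() {
  avail=$(df -BG --output=avail "$1" | tail -n 1 | tr -dc '0-9')
  [ "$avail" -ge "$2" ]
}
```

For example `need_gib db 100 || echo "not enough space"` before starting the import.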
The equivalent section of ZeLoneWolf’s guide is called “Download the Planet”. We don’t actually want to do that - we just want a data extract for our area of interest.
I’ll download a Geofabrik extract in my normal user account and make sure that it is accessible to the “op” user. First browse to (in my case) https://download.geofabrik.de/europe/britain-and-ireland.html . There is a link there to https://download.geofabrik.de/europe/britain-and-ireland-latest.osm.pbf and a comment that says something like “This file was last modified 22 hours ago and contains all OSM data up to 2026-02-12T21:23:29Z”.
When logged in as youruseraccount:
mkdir ~/data
cd ~/data
time wget https://download.geofabrik.de/europe/britain-and-ireland-latest.osm.pbf
I then moved the file so that the filename contained the timestamp
mv britain-and-ireland-latest.osm.pbf britain-and-ireland_2026-02-12T21:23:29Z.osm.pbf
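That rename can be scripted; a small sketch (the timestamp itself still has to come from the Geofabrik page, or from osmium’s fileinfo output):

```shell
# Sketch: turn "<region>-latest.osm.pbf" plus an ISO timestamp into the
# timestamped filename used in this guide.
stamped_name() {
  printf '%s_%s.osm.pbf\n' "${1%-latest.osm.pbf}" "$2"
}
```

For example `mv britain-and-ireland-latest.osm.pbf "$(stamped_name britain-and-ireland-latest.osm.pbf 2026-02-12T21:23:29Z)"`.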
That is a .pbf format download - that format was introduced to OSM around 2010 and is basically pretty standard now. Unfortunately, Overpass still needs the previously used .bz2 format, but we can convert it:
(in screen)
sudo apt install osmium-tool
time osmium cat britain-and-ireland_2026-02-12T21\:23\:29Z.osm.pbf -o britain-and-ireland_2026-02-12T21\:23\:29Z.osm.bz2
That took around 1 hour 20 minutes (and frustratingly the progress bar looks like it was written by someone from Windows 2000). Don’t cancel it if it appears to be stuck; instead check whether it is actually still writing out the file. If you want to verify the resulting file:
(in screen)
time bzip2 --test britain-and-ireland_2026-02-12T21\:23\:29Z.osm.bz2
That took around 11 minutes for me.
Still as youruseraccount, make the download area browsable by the “op” user:
chmod o+rx ~
chmod o+rx ~/data
If you’re not comfortable with this then you can of course copy or move the file as root later.
This is based on ZeLoneWolf’s guide again, which in turn is using scripts that Kai Johnson wrote.
As the overpass user:
mv bin bin.bak && mkdir bin
git clone --depth=1 https://github.com/ZeLonewolf/better-overpass-scripts.git bin
rm -rf bin/.git
and we’ll need to copy some things from the build into that directory. This will include at least:
cp /opt/op/osm-3s_v0.7.62.10/bin/update_database bin/
cp /opt/op/osm-3s_v0.7.62.10/bin/update_from_dir bin/
cp /opt/op/osm-3s_v0.7.62.10/bin/osm3s_query bin/
cp /opt/op/osm-3s_v0.7.62.10/bin/dispatcher bin/
but I actually copied everything that was missing into the new “bin” directory. We installed “locate” above; if anything has been inadvertently missed you can use e.g. “locate nameofmissingthing” and it will find it. This is a bit messy, and it’d be great to have something that’s a bit more solid and has less of the “porcine face paint applicator” feel to it; but I did not want to go too far down that road as I was trying to set something up “without too much work”.
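“Copy everything missing” can be done mechanically. A sketch, with the directory layout as above:

```shell
# Sketch: copy any file present in SRC but absent from DST,
# leaving files already in DST untouched.
copy_missing() {
  src="$1"; dst="$2"
  for f in "$src"/*; do
    [ -e "$dst/$(basename "$f")" ] || cp -p "$f" "$dst"/
  done
}
```

For example `copy_missing /opt/op/osm-3s_v0.7.62.10/bin bin`.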
We’re going to load a data extract from Geofabrik, and we’d also like to be able to update it with changes as other people update OSM. Normally the workflow that I’d suggest for this sort of thing is to download minutely updates from https://planet.osm.org, use trim_osm.py to snip them down to the area that we’re interested in and then apply those as updates.
By default, Overpass does run with planet.osm.org minutely diffs but alas I’ve struggled to get those to work with a data extract; the updater falls over when it finds certain sorts of data that it is not expecting (i.e. was never originally loaded) in diff files. However, Geofabrik does provide daily diff files that match their extracts, so we can use those instead.
Also, we’re only interested in “now” data - we’re not creating an Overpass server with “attic” data that allows us to query data from back in 2012.
We therefore have to make a handful of changes to those scripts.
In them, we will change “https://planet.openstreetmap.org/replication/minute” to “https://download.geofabrik.de/europe/britain-and-ireland-updates”.
We’ll change --meta=attic to --meta=no because we’re not doing anything with “attic” data.
We’ll remove the --attic from the “dispatcher” call.
We’ll change EXPECTED_UPDATE_INTERVAL from 57 to 3557 or even longer. We’re expecting files once a day not once a minute, but checking every hour is not too bad.
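Those edits can equally be made with sed. The exact strings being replaced here are assumptions about the scripts’ contents at the time of writing; verify them with grep before running this against a real file:

```shell
# Sketch: apply the edits described above to one script file.
# The patterns are assumptions about the script's contents -
# check with grep first.
retarget_script() {
  sed -i \
    -e 's|https://planet.openstreetmap.org/replication/minute|https://download.geofabrik.de/europe/britain-and-ireland-updates|g' \
    -e 's|--meta=attic|--meta=no|g' \
    -e 's| --attic||g' \
    -e 's|EXPECTED_UPDATE_INTERVAL=57|EXPECTED_UPDATE_INTERVAL=3557|' \
    "$1"
}
```

Run it once per script that needs the change, e.g. `retarget_script bin/fetch_osc.sh` (filenames assumed).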
There’s a section in ZeLoneWolf’s guide that covers this.
Log files will eventually grow large and will eventually need a log rotation mechanism to be set up, but let’s gloss over that for now as I’m eager to see Overpass actually running!
See ZeLoneWolf’s guide.
I have deliberately not done this yet as I don’t want to automatically do anything; rather I’d like to control it manually so that I can watch that it does what it is supposed to.
(in screen)
time bin/init_osm3s.sh /home/youruseraccount/data/britain-and-ireland_2026-02-12T21\:23\:29Z.osm.bz2 "db/" "./" --meta=no
That took about 77 minutes for me. Lots of files will have been created in “db”. A quick check on disk usage is in order:
df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 157207480 76407544 74363468 51% /
du -BG db/* | sort -n -r | head
53G db/nodes.map
9G db/ways.map
3G db/ways.bin
3G db/nodes_meta.bin
3G db/nodes.bin
2G db/way_tags_global.bin
2G db/ways_attic.map
2G db/nodes_attic.map
1G db/way_tags_local.bin.idx
1G db/way_tags_local.bin
It’s worth noting that those are large numbers for an extract. The 2.3GB data extract has created a 53GB nodes.map file. Compression is supported, but I haven’t tested it.
There’s a file named “replicate_id” in the db directory (the file will be created if it does not already exist) that determines where to start consuming diffs from. These sequence numbers vary by server; the number corresponding to planet.osm.org replication from a certain date will differ from the one for Geofabrik replication for the same date.
In our example we’re using Geofabrik data from 12th Feb 2026. We can browse through https://download.geofabrik.de/europe/britain-and-ireland-updates/ and https://download.geofabrik.de/europe/britain-and-ireland-updates/000/004/ until we find the immediately prior state file https://download.geofabrik.de/europe/britain-and-ireland-updates/000/004/693.state.txt , which contains sequenceNumber=4693. This means that 4693 is our magic number.
We’ll therefore edit the replicate_id file (creating it if it does not exist) and write 4693 (with a linefeed after) to it.
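Scripted, that comes to: pull sequenceNumber out of the state file you found and write it as the replicate_id (paths as used elsewhere in this guide):

```shell
# Sketch: write the sequenceNumber from a Geofabrik state file into the
# replicate_id file that the update scripts read.
state_to_replicate_id() {
  sed -n 's/^sequenceNumber=\([0-9]*\).*/\1/p' "$1" > "$2"
}
```

For example `state_to_replicate_id 693.state.txt db/replicate_id` after downloading the state file identified above.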
Before we do anything else, now is a good opportunity for another snapshot.
If this isn’t the first time you’ve started Overpass, you may want to take backup copies of previous “diff” directories or “log” files. Then:
bin/startup.sh
You should see something like this:
[2026-02-15 12:43:14] INFO: Starting Overpass API components...
[2026-02-15 12:43:14] INFO: Starting base_dispatcher...
[2026-02-15 12:43:14] INFO: Cleaning up stale files...
[2026-02-15 12:43:14] INFO: base_dispatcher is running (PID: 107771)
[2026-02-15 12:43:14] INFO: Starting area_dispatcher...
[2026-02-15 12:43:14] INFO: area_dispatcher is running (PID: 107783)
[2026-02-15 12:43:14] INFO: Starting apply_osc...
[2026-02-15 12:43:14] INFO: apply_osc is running (PID: 107795)
[2026-02-15 12:43:14] INFO: Starting fetch_osc...
[2026-02-15 12:43:14] INFO: fetch_osc is running (PID: 107835)
[2026-02-15 12:43:14] INFO: Performing final verification...
[2026-02-15 12:43:16] INFO: base_dispatcher verified (PID: 107771)
[2026-02-15 12:43:17] INFO: area_dispatcher verified (PID: 107783)
[2026-02-15 12:43:17] INFO: apply_osc verified (PID: 107795)
[2026-02-15 12:43:17] INFO: fetch_osc verified (PID: 107835)
[2026-02-15 12:43:17] INFO: All Overpass components started successfully
[2026-02-15 12:43:17] INFO: === Process Status ===
base_dispatcher PID: 107771
area_dispatcher PID: 107783
apply_osc PID: 107795
fetch_osc PID: 107835
In the directories below “diff”, you should see that it has downloaded daily diffs for any days since your extract, for example:
/opt/op/diff/000/004: (56 GiB available)
drwxrwxr-x 2 op op 4096 Feb 16 01:07 .
-rw-rw-r-- 1 op op 3874289 Feb 16 01:07 697.osc.gz
-rw-rw-r-- 1 op op 113 Feb 16 01:07 697.state.txt
-rw-rw-r-- 1 op op 3033325 Feb 15 12:43 696.osc.gz
-rw-rw-r-- 1 op op 3405594 Feb 15 12:43 695.osc.gz
-rw-rw-r-- 1 op op 3057997 Feb 15 12:43 694.osc.gz
-rw-rw-r-- 1 op op 113 Feb 15 12:43 695.state.txt
-rw-rw-r-- 1 op op 113 Feb 15 12:43 696.state.txt
-rw-rw-r-- 1 op op 113 Feb 15 12:43 694.state.txt
drwxrwxr-x 3 op op 4096 Feb 15 12:43 ..
In “log” you should see something like:
/opt/op/log: (56 GiB available)
-rw-rw-r-- 1 op op 12111701 Feb 16 23:39 apply_osc_to_db.out
drwxr-xr-x 13 op op 4096 Feb 16 20:14 ..
drwxrwxr-x 2 op op 4096 Feb 15 12:43 .
-rw-rw-r-- 1 op op 0 Feb 15 12:43 osm_base.out
-rw-rw-r-- 1 op op 0 Feb 14 14:26 fetch_osc.out
-rw-rw-r-- 1 op op 0 Feb 14 14:26 areas.out
At the command line type:
bin/osm3s_query
Paste in this:
<query type="nwr"><bbox-query n="51.96" s="51.86" w="-3.31" e="-3.22"/><has-kv k="amenity" v="pub"/></query><print/>
Press return. Press ^d. A selection of data will be returned.
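The same query in the more common Overpass QL form can also be sent to the HTTP endpoint (endpoint name as in a standard Overpass install; the hostname is a placeholder). Note that Overpass QL bounding boxes are given in (south, west, north, east) order:

```shell
# Sketch: build the Overpass QL equivalent of the XML query above.
# Bbox argument order is south, west, north, east.
pub_query() {
  printf 'nwr["amenity"="pub"](%s,%s,%s,%s);out;' "$1" "$2" "$3" "$4"
}
# On the server it could be sent with e.g.:
#   curl -s https://yourserver.yourdomain/api/interpreter \
#     --data-urlencode "data=$(pub_query 51.86 -3.31 51.96 -3.22)"
```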
In a web browser, browse to https://overpass-turbo.eu/s/2kEW .
Click “settings”. Change “server” from “https://overpass-api.de/api/” to “https://yourserver.yourdomain/api/”. Click “run”. You should not get an error, and should get a couple of nodes and 4 ways returned.
For the avoidance of doubt - if you browse to “https://yourserver.yourdomain/” you’ll get some sort of “It works!” page. If you browse to “https://yourserver.yourdomain/api/” you’ll actually get an error - it’s designed to be accessed (see the CORS settings above) by Overpass Turbo, not a regular browser.
Shutting everything down and taking a snapshot of the server is a good idea at this point. The long-term cost of snapshots is small (€0.20 per month or so). The cost of leaving a server of this specification running 24x7 isn’t that large - around €10, perhaps a couple of beers or a couple of fancy coffees.
You might also want to think about setting up an Overpass server that does include metadata and attic data - but you’re probably better off with a dedicated server for that, and better off following one of the other guides linked above.
Edit: Minor clarification re use of Overpass API URL following a question on IRC.
Hello! This is my first diary entry and I wanted to dedicate it to the forum post that I made about the UK’s only (I believe) ER OUT routes, for use in emergencies: mainly flooding in this case.
After major flooding in 2013 the council created the Lincolnshire ER Routes to enable people to evacuate quickly from the flood areas. Many of you may have driven past these and never even noticed! They are red rectangular signs with the white text “ER out” on them and a direction to follow. They are placed at every turn, so the evacuees follow the road ahead until a sign says otherwise.

The end of the route signifies that the evacuees are clear of the major flood risk and (presumably) there would be further guidance at the end of the route. The route-end sign is the same as the direction signs, but features five black diagonal lines.

In the forum post I have included some proposed tags, along with Insert User, who has suggested some changes to the signage.
I will be unable to fully map these routes out as I rarely venture to the south of Lincolnshire. If you live near one of these routes please do help to map these! I presume it will take a while to map all of the routes but I think it will be worth it in the event of any flooding within the region!
Please do not hesitate to contribute to the forum post!
In applications built on OpenStreetMap (certainly Mapy, OsmAnd and Organic Maps, perhaps others), the E65 is WRONGLY shown as the old Lamia–Domokos–Farsala–Larisa road instead of, CORRECTLY, the Thermopylae–Kalampaka motorway (still unfinished further north, up to the junction with the Egnatia). For foreign travellers in particular this is hugely confusing.
The FOSSGIS Conference 2026 takes place from 25 to 28 March 2026 in Göttingen and online. Only a few weeks remain until the conference; anticipation is growing steadily and preparations are in full swing!
The conference is organised by the non-profit FOSSGIS e.V. and the OpenStreetMap community in cooperation with the Geographical Institute of the Georg-August-Universität Göttingen, and takes place on the campus of the University of Göttingen.
Once again, interest in the conference is high and registrations are rising week by week. Fortunately, the university’s central lecture hall building offers plenty of room, so this could become the largest FOSSGIS Conference yet.

The FOSSGIS team is again looking forward to an exciting programme with numerous talks, expert Q&A sessions, demo sessions, BoFs, user meetups and 28 workshops. The conference programme runs from Wednesday to Friday in the Zentrales Hörsaalgebäude (ZHG) of the University of Göttingen. On Saturday, the OSM Saturday and the Community Sprint take place at the Faculty of Geosciences and Geography on the north campus.
This year the conference starts as early as Tuesday, 24 March 2026, at 10:00 with longer workshops (180 minutes). Choose from seven workshops (see the programme) and arrive on Tuesday. The workshops address beginners and advanced users alike, and places are still available; book a workshop and use the chance to build up knowledge on a topic in a short time.
There are numerous opportunities to network during and around the conference. Break catering, combined with the company and poster exhibitions, takes place in the ZHG foyer, as does the evening event on the first conference day. The user meetups, expert Q&A sessions and other community sessions offer opportunities for professional networking, and online participation is possible: https://www.fossgis-konferenz.de/2026/socialevents/
This year we are pleased to offer a varied fringe programme with exciting excursions and meetups at interesting locations around Göttingen. FOSSGIS also stands for networking, starting on Tuesday evening: the Geochicas invite everyone to a meetup, and the unofficial kick-off, a joint dinner (self-pay), welcomes all participants who have already arrived.
All information can be found at https://www.fossgis-konferenz.de/2026/socialevents/
Many thanks to the conference sponsors, whose support contributes significantly to financing the event. Become a FOSSGIS sponsor yourself; we are grateful for further support. Information can be found at https://fossgis-konferenz.de/2026/#Sponsoring

FOSSGIS thrives on volunteer engagement: numerous helpers get involved and take on a wide variety of tasks before and during the conference. Many thanks for that!
Helpers are still being sought, in particular for session chairing, supporting speakers in the lecture halls, and catering; see https://www.fossgis-konferenz.de/2026/helfen/.
On Saturday, 28 March 2026, the OSM Saturday and Community Sprint will take place in the rooms of the Geographical Institute at Goldschmidtstr. 3-5, 37073 Göttingen: an opportunity to get talking, to pitch in at the Community Sprint, or to build up know-how. Everyone is warmly invited to take part: https://pretalx.com/fossgis2026/talk/VVYN7A/.
Information around FOSSGIS can be found under the hashtag #FOSSGIS2026. We use it for updates on social media; please use it too, to tie the social media activity together.
The FOSSGIS archive holds the websites of past conferences, including programmes and videos: https://fossgis-konferenz.de/liste.html.
The FOSSGIS Team 2026 wishes everyone a safe journey and looks forward to an exciting conference in Göttingen.