How to parse csv using DictReader?

  • Ben D
    New Member
    • Mar 2011
    • 2

    How to parse csv using DictReader?

    I'm trying to create a visualisation of radiation data from Japan, from this website: http://www.sendung.de/japan-radiation-open-data/
    The data is in CSV format, and I was told Python is good for parsing CSV.
    I'm struggling to get to grips with how to use the csv module to parse specific rows of data. I tried using DictReader but I'm really not getting anywhere; total noob.

    There are four column headers: station_id, datetime, sa, ra

    I need to get the data for every row with a specific station_id, e.g. all the data for every instance of station_id 1070000016.

    Thanks in advance for any help.
  • bvdet
    Recognized Expert Specialist
    • Oct 2006
    • 2851

    #2
    I think this will do what you are looking for:
    Code:
    import csv
    
    # Column names for the file; they are supplied explicitly because
    # the file itself has no header row
    labels = ['station_id', 'datetime', 'sa', 'ra']
    fn = r'D:\SDS2_7.0\macro\Work In Progress\station_data_1h.csv'
    
    station = '1070000016'
    
    # Open in binary mode for the Python 2 csv module
    f = open(fn, 'rb')
    
    # The file is tab-delimited, hence the 'excel-tab' dialect
    reader = csv.DictReader(f, labels, dialect='excel-tab')
    
    # Collect (sa, ra, datetime) for every row matching the station id
    station_data = [(item["sa"],
                     item["ra"],
                     item["datetime"]) for item in reader
                    if item['station_id'] == station]
    
    f.close()
    
    print "Data for station %s" % (station)
    print "\n".join(["%s\t%s\t%s" % (item) for item in station_data])
    Output:
    Code:
    >>> Data for station 1070000016
    -999	-888	2011-03-24 20:10:00
    -999	-888	2011-03-24 20:20:00
    -999	-888	2011-03-24 20:30:00
    -999	-888	2011-03-24 20:40:00
    -999	-999	2011-03-24 20:50:00
    >>>
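
    For anyone reading this later on Python 3, a roughly equivalent sketch would look like the following. The sample rows are taken from the output above (station_id, datetime, sa, ra, tab-separated); the non-matching station id in the third row is just an illustrative placeholder, and in practice you would open the downloaded file instead of a StringIO:
    Code:
    ```python
    import csv
    import io
    
    # Sample tab-delimited data modelled on the output earlier in the
    # thread; in practice, open the real station_data_1h.csv instead
    sample = (
        "1070000016\t2011-03-24 20:10:00\t-999\t-888\n"
        "1070000016\t2011-03-24 20:20:00\t-999\t-888\n"
        "9999999999\t2011-03-24 20:30:00\t-111\t-222\n"
    )
    
    labels = ['station_id', 'datetime', 'sa', 'ra']
    station = '1070000016'
    
    # Python 3: DictReader reads from a text stream; fieldnames are
    # supplied explicitly because the data has no header row
    reader = csv.DictReader(io.StringIO(sample), fieldnames=labels,
                            dialect='excel-tab')
    
    # Keep (sa, ra, datetime) for every row matching the station id
    station_data = [(row['sa'], row['ra'], row['datetime'])
                    for row in reader
                    if row['station_id'] == station]
    
    print("Data for station %s" % station)
    print("\n".join("\t".join(row) for row in station_data))
    ```
    With a real file you would replace the StringIO with `open(fn, newline='')` (text mode with newline='' is what the Python 3 csv module expects, instead of the binary mode used above).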


    • Ben D
      New Member
      • Mar 2011
      • 2

      #3
      That was surprisingly straightforward - thank you very much!
