DHIS2 Developer Manual
February 2021
Warranty: THIS DOCUMENT IS PROVIDED BY THE AUTHORS "AS IS" AND ANY EXPRESS OR IMPLIED
WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS FOR A PARTICULAR PURPOSE, ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS MANUAL AND THE
PRODUCTS MENTIONED HEREIN, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
License: Permission is granted to copy, distribute and/or modify this document under the terms
of the GNU Free Documentation License, Version 1.3 or any later version published by the Free
Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU Free Documentation License":
[Link] files/[Link]
1 Web API
The Web API is a component which makes it possible for external systems to access and
manipulate data stored in an instance of DHIS2. More precisely, it provides a programmatic
interface to a wide range of exposed data and service methods for applications such as third-
party software clients, web portals and internal DHIS2 modules.
1.1 Introduction
The Web API adheres to many of the principles behind the REST architectural style. To mention
a few important ones:
1. The fundamental building blocks are referred to as resources. A resource can be anything
exposed to the Web, from a document to a business process - anything a client might want
to interact with. The information aspects of a resource can be retrieved or exchanged
through resource representations. A representation is a view of a resource’s state at any
given time. For instance, the reportTable resource in DHIS2 represents a tabular report of
aggregated data for a certain set of parameters. This resource can be retrieved in a variety
of representation formats including HTML, PDF, and MS Excel.
2. All resources can be uniquely identified by a URI (also referred to as URL). All resources
have a default representation. You can indicate that you are interested in a specific
representation by supplying an Accept HTTP header, a file extension or a format query
parameter. So in order to retrieve the PDF representation of a report table you can supply
an Accept: application/pdf header or append .pdf or ?format=pdf to your request URL.
3. Interactions with the API require the correct use of HTTP methods or verbs. This implies
that for a resource you must issue a GET request when you want to retrieve it, a POST
request when you want to create one, PUT when you want to update it and DELETE when
you want to remove it. So if you want to retrieve the default representation of a report
table you can send a GET request to e.g. /reportTable/iu8j/hYgF6t, where the last part is
the report table identifier.
While all of this might sound complicated, the Web API is actually very simple to use. We will
proceed with a few practical examples in a minute.
1.2 Authentication
The DHIS2 Web API supports two protocols for authentication, Basic Authentication and OAuth 2.
You can verify and get information about the currently authenticated user by making a GET
request to the following URL:
/api/33/me
You can get more information about authorities (and whether the current user has a certain
authority) by using the following endpoints:
/api/33/me/authorities
/api/33/me/authorities/ALL
1.2.1 Basic Authentication

The DHIS2 Web API supports Basic authentication. Basic authentication is a technique for clients
to send login credentials over HTTP to a web server. Technically speaking, the username followed
by a colon and the password is Base64-encoded, prefixed with Basic and supplied as the
value of the Authorization HTTP header. More formally that is:

Authorization: Basic base64encode(username:password)
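As a quick sketch (assuming the demo credentials admin:district used elsewhere in this manual
and a hypothetical instance at localhost:8080), a Basic-authenticated request could look like this:

curl -u admin:district -H "Accept: application/json" "http://localhost:8080/api/33/me"

Here curl takes care of Base64-encoding the credentials and setting the Authorization header for you.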
Most network-aware development environments provide support for Basic authentication, such
as Apache HttpClient and Spring RestTemplate. An important note is that this authentication
scheme provides no security since the username and password are sent in plain text and can be
easily observed by an attacker. Using Basic is recommended only if the server is using SSL/TLS
(HTTPS) to encrypt communication with clients. Consider this a hard requirement in order to
provide secure interactions with the Web API.
1.2.2 Two-factor authentication

DHIS2 supports two-factor authentication. This can be enabled per user. When enabled, users
will be asked to enter a 2FA code when logging in. You can read more about 2FA here.
1.2.3 OAuth2
DHIS2 supports the OAuth2 authentication protocol. OAuth2 is an open standard for
authorization which allows third-party clients to connect on behalf of a DHIS2 user and get a
reusable bearer token for subsequent requests to the Web API. DHIS2 does not support fine-
grained OAuth2 roles but rather provides applications access based on user roles of the DHIS2
user.
Each client for which you want to allow OAuth 2 authentication must be registered in DHIS2. To
add a new OAuth2 client go to Apps > Settings > OAuth2 Clients in the user interface,
click Add new and enter the desired client name and the grant types.
An OAuth2 client can be added through the Web API. As an example, we can send a payload like
this:
{
  "name": "OAuth2 Demo Client",
  "cid": "demo",
  "secret": "1e6db50c-0fee-11e5-98d0-3c15c2c6caf6",
  "grantTypes": ["password", "refresh_token", "authorization_code"],
  "redirectUris": ["http://[Link]"]
}
SERVER="https://[Link]/dev"
curl -X POST -H "Content-Type: application/json" -d @[Link] -u admin:district "$SERVER/api/oAuth2Clients"
We will use this client as the basis for our next grant type examples.
The simplest of all grant types is the password grant type. This grant type is similar to basic
authentication in the sense that it requires the client to collect the user’s username and
password. As an example we can use our demo server:
SERVER="https://[Link]/dev"
SECRET="1e6db50c-0fee-11e5-98d0-3c15c2c6caf6"
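A minimal sketch of the token request for the password grant, assuming the demo user
admin:district, the client registered above and the standard /uaa/oauth/token token endpoint:

curl -X POST -H "Accept: application/json" -u demo:$SECRET "$SERVER/uaa/oauth/token" -d grant_type=password -d username=admin -d password=district

The server responds with a token payload like the one below.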
"expires_in":
43175,
"scope":
"ALL",
"access_token":
"07fc551c-806c-41a4-9a8c-10658bd15435",
"refresh_token":
"a4e4de45-4743-481d-9345-2cfe34732fcc",
"token_type":
"bearer"
}
For now, we will concentrate on the access_token, which is what we will use as our
authentication (bearer) token. As an example, we will get all data elements using our token:
SERVER="https://[Link]/dev"
curl -H "Authorization: Bearer 07fc551c-806c-41a4-9a8c-10658bd15435" "$SERVER/api/33/[Link]"
In general the access tokens have limited validity. You can have a look at the expires_in
property of the response in the previous example to understand when a token expires. To get a
fresh access_token you can make another round trip to the server and use refresh_token
which allows you to get an updated token without needing to ask for the user credentials one
more time.
SERVER="https://[Link]/dev"
SECRET="1e6db50c-0fee-11e5-98d0-3c15c2c6caf6"
REFRESH_TOKEN="a4e4de45-4743-481d-9345-2cfe34732fcc"
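A sketch of the refresh request, under the same assumptions as the password grant example above:

curl -X POST -H "Accept: application/json" -u demo:$SECRET "$SERVER/uaa/oauth/token" -d grant_type=refresh_token -d refresh_token=$REFRESH_TOKEN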
The response will be exactly the same as when you get a token to start with.
The authorization code grant type is the recommended approach if you don't want to store the user
credentials externally. It allows DHIS2 to collect the username/password directly from the user
instead of the client collecting them and then authenticating on behalf of the user. Please be
aware that this approach uses the redirectUris part of the client payload.
Step 1: Visit the following URL using a web browser. If you have more than one redirect URI, you
might want to add &redirect_uri=[Link] to the URL:
SERVER="https://[Link]/dev"
$SERVER/uaa/oauth/authorize?client_id=demo&response_type=code
Step 2: After the user has successfully logged in and accepted your client's access request, the
browser will be redirected back to your redirect URI like this:
[Link]
Step 3: This step is similar to what we did for the password grant type. Using the given code, we
will now ask for an access token:
SERVER="https://[Link]/dev"
SECRET="1e6db50c-0fee-11e5-98d0-3c15c2c6caf6"
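A sketch of the exchange, assuming a hypothetical code value XYZ obtained in step 2 and the
redirect URI registered for the client:

curl -X POST -H "Accept: application/json" -u demo:$SECRET "$SERVER/uaa/oauth/token" -d grant_type=authorization_code -d code=XYZ -d redirect_uri=<your-redirect-uri>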
1.3 Error and info messages

The Web API uses a consistent format for all error/warning and informational messages:
"httpStatus":
"Forbidden",
12
1 Web API 1.4 Date and period format
"message": "Vous n'avez pas la permission de
lire ce type d'objet.",
"httpStatusCode":
403,
"status":
"ERROR"
}
Here we can see from the message that the user tried to access a resource they did not have access
to. The response uses the HTTP status code 403, the HTTP status message Forbidden and a
descriptive message.
WebMessage properties

• httpStatus: HTTP status message for this response, see RFC 2616 (Section 10) for more information.
• httpStatusCode: HTTP status code for this response, see RFC 2616 (Section 10) for more information.
• status: DHIS2 status; possible values are OK | WARNING | ERROR, where OK means everything was successful, ERROR means the operation did not complete and WARNING means the operation was partially successful. If the message contains a response property, please look there for more information.
• message: A user-friendly message telling whether the operation was a success or not.
• devMessage: A more technical, developer-friendly message (not currently in use).
• response: Extension point for future extension to the WebMessage format. This will be documented when it starts being used.
1.4 Date and period format

Throughout the Web API, we refer to dates and periods. The date format is:
yyyy-MM-dd
For instance, if you want to express March 20, 2014, you must use 2014-03-20.
The period format is described in the following table (also available on the API endpoint /api/periodTypes):
Period format
1.4.1 Relative Periods
In some parts of the API, like for the analytics resource, you can utilize relative periods in addition
to fixed periods (defined above). The relative periods are relative to the current date and allow
e.g. for creating dynamic reports. The available relative period values are:
1.5 Identifier schemes

This section provides an explanation of the identifier scheme concept. Identifier schemes are
used to map metadata objects to other metadata during import, and to render metadata as part
of exports. Please note that not all schemes work for all API calls, and not all schemes can be
used for both input and output. This is outlined in the sections explaining the various Web APIs.
The full set of identifier scheme object types available are listed below, using the name of the
property to use in queries:
• idScheme
• dataElementIdScheme
• categoryOptionComboIdScheme
• orgUnitIdScheme
• programIdScheme
• programStageIdScheme
• trackedEntityIdScheme
• trackedEntityAttributeIdScheme
The general idScheme applies to all types of objects. It can be overridden by specific object types.
The default scheme for all parameters is UID (stable DHIS2 identifiers). The supported identifier
schemes are described in the table below.
Scheme Values

• ID, UID: Match on the DHIS2 stable identifier; this is the default id scheme.
• CODE: Match on the DHIS2 code, mainly used to exchange data with an external system.
• NAME: Match on the DHIS2 name. Please note that this uses what is available as [Link], and not the translated name. Also note that names are not always unique, and in that case they cannot be used.
• ATTRIBUTE:ID: Match on a metadata attribute. This attribute needs to be assigned to the type you are matching on, and its unique property must be set to true. The main usage of this is also to exchange data with external systems; it has some advantages over CODE since multiple attributes can be added, so it can be used to synchronize with more than one system.
Note that identifier schemes are not an independent feature but need to be used in combination
with resources such as data value import and metadata import.
As an example, to specify CODE as the general id scheme and override with UID for organisation
unit id scheme you can use these query parameters:
?idScheme=CODE&orgUnitIdScheme=UID
As another example, to specify an attribute for the organisation unit id scheme, code for the data
element id scheme and use the default UID id scheme for all other objects you can use these
parameters:
?orgUnitIdScheme=ATTRIBUTE:j38fk2dKFsG&dataElementIdScheme=CODE
1.6 Browsing the Web API

The entry point for browsing the Web API is /api. This resource provides links to all available
resources. Four resource representation formats are consistently available for all resources:
HTML, XML, JSON, and JSONP. Some resources will have other formats available, like MS Excel,
PDF, CSV, and PNG. To explore the API from a web browser, navigate to the /api entry point and
follow the links to your desired resource, for instance /api/dataElements. For all resources
which return a list of elements certain query parameters can be used to modify the response:
Query parameters

• paging (values: true | false, default: true): Indicates whether to return lists of elements in pages.
• page (value: number, default: 1): Defines which page number to return.
• pageSize (value: number, default: 50): Defines the number of elements to return for each page.
• order (value: property:asc/iasc/desc/idesc): Order the output using the specified order; only properties that are both persisted and simple (no collections, idObjects etc.) are supported. iasc and idesc are case insensitive sorting.
An example of how these parameters can be used to get a full list of data element groups in XML
response format is:
/api/[Link]?links=false&paging=false
You can query for elements on the name property instead of returning a full list of elements
using the query query variable. In this example we query for all data elements with the word
“anaemia” in the name:
/api/dataElements?query=anaemia
You can get specific pages and page sizes of objects like this:
/api/[Link]?page=2&pageSize=20
/api/[Link]?paging=false
/api/[Link]?order=shortName:desc
You can find an object based on its ID across all object types through the identifiableObjects
resource:
/api/identifiableObjects/<id>
1.6.1 Translation
DHIS2 supports translations of database content, such as data elements, indicators, and
programs. All metadata objects in the Web API have properties meant to be used for display / UI
purposes, which include displayName, displayShortName and displayDescription.
Translate options
1.6.2 Translation API
The translations for an object are rendered as part of the object itself in the translations array.
Note that the translations array in the JSON/XML payloads is normally pre-filtered for you, which
means it cannot directly be used to import/export translations (as that would normally
overwrite locales other than the current user's).
{
  "id": "FTRrcoaog83",
  "displayName": "Accute French",
  "translations": [
    {
      "property": "SHORT_NAME",
      "locale": "fr",
      "value": "Accute French"
    },
    {
      "property": "NAME",
      "locale": "fr",
      "value": "Accute French"
    }
  ]
}
{
  "id": "FTRrcoaog83",
  "displayName": "Accute Flaccid Paralysis (Deaths < 5 yrs)",
  "translations": [
    {
      "property": "FORM_NAME",
      "locale": "en_FK",
      "value": "aa"
    },
    {
      "property": "SHORT_NAME",
      "locale": "en_GB",
      "value": "Accute Flaccid Paral"
    },
    {
      "property": "SHORT_NAME",
      "locale": "fr",
      "value": "Accute French"
    },
    {
      "property": "NAME",
      "locale": "fr",
      "value": "Accute French"
    },
    {
      "property": "NAME",
      "locale": "en_FK",
      "value": "aa"
    },
    {
      "property": "DESCRIPTION",
      "locale": "en_FK",
      "value": "aa"
    }
  ]
}
Note that even if you get the unfiltered result, and are using the appropriate type endpoint,
i.e. /api/dataElements, we do not allow updates, as it would be too easy to make mistakes and
overwrite the other available locales.
To read and update translations you can use the special translations endpoint for each object
resource. These can be accessed by GET or PUT on the appropriate
/api/<object-type>/<object-id>/translations endpoint.
As an example, for a data element with identifier FTRrcoaog83, you could use
/api/dataElements/FTRrcoaog83/translations to get and update translations. The fields
available are property with options NAME, SHORT_NAME, DESCRIPTION, locale which
supports any valid locale ID, and the translated property value.
"property":
"NAME",
"locale":
"fr",
"value": "Paralysie Flasque
Aiguë (Décès <5 ans)"
}
This payload would then be added to a translation array, and sent back to the appropriate
endpoint:
"translations":
[
{
"property":
"NAME",
"locale":
"fr",
"value": "Paralysie Flasque
Aiguë (Décès <5 ans)"
}
]
}
For a data element with ID FTRrcoaog83 you can PUT this to
/api/dataElements/FTRrcoaog83/translations. Make sure to send all translations for the specific
object and not just for a single locale (if not, you will potentially overwrite the existing
translations for other locales).
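A minimal curl sketch of such an update, assuming the payload above is saved in a file named
translations.json (a hypothetical name), the demo credentials admin:district and a hypothetical
instance at localhost:8080:

curl -X PUT -H "Content-Type: application/json" -d @translations.json -u admin:district "http://localhost:8080/api/dataElements/FTRrcoaog83/translations"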
1.6.3 Web API versions

The Web API is versioned starting from DHIS 2.25. The API versioning follows the DHIS2 major
version numbering. As an example, the API version for DHIS 2.33 is 33.
You can access a specific API version by including the version number after the /api component,
as an example like this:
/api/33/dataElements
If you omit the version part of the URL, the system will use the current API version. As an
example, for DHIS 2.25, when omitting the API part, the system will use API version 25. When
developing API clients it is recommended to use explicit API versions (rather than omitting the API
version), as this will protect the client from unforeseen API changes.
The last three API versions will be supported. As an example, DHIS version 2.27 will support API
version 27, 26 and 25.
Note that the metadata model is not versioned and that you might experience changes e.g. in
associations between objects. These changes will be documented in the DHIS2 major version
release notes.
1.7 Metadata object filter

To filter the metadata there are several filter operations that can be applied to the returned list
of metadata. The format of the filter itself is straightforward and follows the pattern
property:operator:value, where property is the property on the metadata you want to filter on,
operator is the comparison operator you want to perform and value is the value to check against
(not all operators require a value). Please see the schema section to discover which properties are
available. Recursive filtering, i.e. filtering on associated objects or collections of objects, is
supported as well.
Available Operators

• eq (string | boolean | integer | float | enum | collection (checks for size) | date; value required): Equality
• !eq (string | boolean | integer | float | enum | collection (checks for size) | date; value required): Inequality
• ne (string | boolean | integer | float | enum | collection (checks for size) | date; value required): Inequality
• like (string; value required): Case sensitive string, match anywhere
• !like (string; value required): Case sensitive string, not match anywhere
• $like (string; value required): Case sensitive string, match start
• !$like (string; value required): Case sensitive string, not match start
• like$ (string; value required): Case sensitive string, match end
• !like$ (string; value required): Case sensitive string, not match end
• ilike (string; value required): Case insensitive string, match anywhere
• !ilike (string; value required): Case insensitive string, not match anywhere
• $ilike (string; value required): Case insensitive string, match start
• !$ilike (string; value required): Case insensitive string, not match start
• ilike$ (string; value required): Case insensitive string, match end
• !ilike$ (string; value required): Case insensitive string, not match end
• gt (string | boolean | integer | float | collection (checks for size) | date; value required): Greater than
• ge (string | boolean | integer | float | collection (checks for size) | date; value required): Greater than or equal
• lt (string | boolean | integer | float | collection (checks for size) | date; value required): Less than
• le (string | boolean | integer | float | collection (checks for size) | date; value required): Less than or equal
• null (all types; no value required): Property is null
• !null (all types; no value required): Property is not null
• empty (collection; no value required): Collection is empty
• token (string; value required): Match on multiple tokens in search property
• !token (string; value required): Not match on multiple tokens in search property
• in (string | boolean | integer | float | date; value required): Find objects matching one or more values
• !in (string | boolean | integer | float | date; value required): Find objects not matching one or more values
Operators will be applied as a logical AND query; if you need an OR query, you can have a look at our
in filter (also have a look at the section below). The filtering mechanism allows for recursion. See
below for some examples.
/api/dataElements?filter=id:eq:ID1&filter=id:eq:ID2
Get all data elements which have the dataSet with id ID1:
/api/dataElements?filter=[Link]:eq:ID1
Get all data elements with aggregation operator “sum” and value type “int”:
/api/[Link]?filter=aggregationOperator:eq:sum&filter=type:eq:int
You can do filtering within collections, e.g. to get data elements which are members of the “ANC”
data element group you can use the following query using the id property of the associated data
element groups:
/api/[Link]?filter=[Link]:eq:qfxEYY9xAl6
Since all filters are combined with AND by default, you can't find a data element matching more
than one id that way; for that purpose you can use the in operator.
/api/[Link]?filter=id:in:[fbfJHSPpUQD,cYeuwXTCPkU]
1.7.1 Logical operators

As mentioned in the section before, the default logical operator applied to the filters is AND,
which means that all object filters must be matched. There are however cases where you want to
match on one of several filters (maybe the id and code fields) and in those cases it is possible to
switch the root logical operator from AND to OR using the rootJunction parameter.
Example: Normal filtering where both id and code must match to have a result returned
/api/[Link]?filter=id:in:[id1,id2]&filter=code:eq:code1
Example: Filtering where the logical operator has been switched to OR and now only one of the
filters must match to have a result returned
/api/[Link]?filter=id:in:[id1,id2]&filter=code:eq:code1&rootJunction=OR
1.7.2 Identifiable token filter

In addition to the specific property based filtering mentioned above, we also have token based
AND filtering across a set of properties: id, code, and name (also shortName if available). These
properties are commonly referred to as identifiable. The idea is to filter metadata whose id,
name, code or short name contains something.
Example: Filter all data elements containing 2nd in any of the following: id,name,code,
shortName
/api/[Link]?filter=identifiable:token:2nd
Example: Get all data elements where ANC visit is found in any of the identifiable properties. The
system returns all data elements where both tokens (ANC and visit) are found anywhere in
identifiable properties.
/api/[Link]?filter=identifiable:token:ANC visit
It is also possible to combine the identifiable filter with property-based filter and expect the
rootJunction to be applied.
/api/[Link]?filter=identifiable:token:ANC visit&filter=displayName:ilike:tt1
/api/[Link]?filter=identifiable:token:ANC visit
&filter=displayName:ilike:tt1&rootJunction=OR
1.7.3 Capture Scope filter

In addition to the filtering mentioned above, we have a special filtering query parameter named
restrictToCaptureScope. If restrictToCaptureScope is set to true, only those metadata objects that
are either unassigned to any organisation units or that are assigned explicitly to the logged-in
user's capture scope organisation units will be returned in the response. In addition to filtering
the metadata object lists, the associated organisation units will also be filtered to only include
the capture scoped organisation units. This filtering parameter can be used for the Program and
CategoryOption listing APIs.
This feature is generally beneficial for reducing the payload size if there is a large number of
organisation units associated with various metadata objects.
Some examples
/api/[Link]?restrictToCaptureScope=true&fields=*
/api/[Link]?restrictToCaptureScope=true&fields=*
All existing filters will work in addition to the capture scope filter.
/api/[Link]?restrictToCaptureScope=true&fields=*&filter=displayName:ilike:11
1.8 Metadata field filter

In many situations, the default views of the metadata can be too verbose. A client might only
need a few fields from each object and want to remove unnecessary fields from the response. To
discover which fields are available for each object please see the schema section.
The format for include/exclude allows for infinite recursion. To filter at the “root” level you can
just use the name of the field, i.e. ?fields=id,name which would only display the id and name
fields for every object. For objects that are either collections or complex objects with properties
on their own, you can use the format ?fields=id,name,dataSets[id,name] which would
return id, name of the root, and the id and name of every data set on that object. Negation can
be done with the exclamation operator, and we have a set of presets of field select. Both XML
and JSON are supported.
/api/indicators?fields=id,name
Example: Get id and name from dataElements, and id and name from the dataSets on
dataElements:
/api/dataElements?fields=id,name,dataSets[id,name]
To exclude a field from the output you can use the exclamation ! operator. This is allowed
anywhere in the query and will simply not include that property as it might have been inserted in
some of the presets.
A few presets (selected fields groups) are available and can be applied using the : operator.
Property operators

• <field-name>: Include property with name, if it exists.
• <object>[<field-name>, …]: Includes a field within either a collection (will be applied to every object in that collection), or just on a single object.
• !<field-name>, <object>[!<field-name>]: Do not include this field name; it also works inside objects/collections. Useful when you use a preset to include fields.
• *, <object>[*]: Include all fields on a certain object; if applied to a collection, it will include all fields on all objects in that collection.
• :<preset>: Alias to select multiple fields. Three presets are currently available, see the table below for descriptions.
Field presets

• all: All fields of the object.
• *: Alias for all.
• identifiable: Includes id, name, code, created and lastUpdated fields.
• nameable: Includes id, name, shortName, code, description, created and lastUpdated fields.
• persisted: Returns all persisted properties on an object; does not take into consideration if the object is the owner of the relation.
• owner: Returns all persisted properties on an object where the object is the owner of all properties; this payload can be used to update through the API.
/api/dataSets?fields=:all,!organisationUnits
Example: Include only id, name and the collection of organisation units from a data set, but
exclude the id from organisation units:
/api/dataSets/BfMAe6Itzgt?fields=id,name,organisationUnits[:all,!id]
/api/[Link]?fields=:nameable
1.8.1 Field transformers

In DHIS 2.17 we introduced field transformers; the idea is to allow further customization of the
properties on the server side.
/api/dataElements/ID?fields=id~rename(i),name~rename(n)
/api/[Link]?fields=id,displayName,dataElements~isNotEmpty~rename(haveDataElements)
Available Transformers
Examples:
/api/dataElements?fields=dataSets~size
/api/dataElements?fields=dataSets~isEmpty
/api/dataElements?fields=dataSets~isNotEmpty
/api/dataElements/ID?fields=id~rename(i),name~rename(n)
/api/dataElementGroups?fields=id,displayName,dataElements~paging(1;20)
# Include array with names of organisation units (collection only returns field name):
/api/[Link]?fields=id,organisationUnits~pluck[name]
1.9 Metadata create, read, update, delete, validate

All metadata entities in DHIS2 have their own API endpoint which supports CRUD operations
(create, read, update and delete). The endpoint URLs follow this format:
/api/<entityName>
The entityName uses the camel-case notation. As an example, the endpoint for data elements is:
/api/dataElements
The following request query parameters are available across all metadata endpoints.
1.9.2 Creating and updating objects
For creating new objects you will need to know the endpoint, the type format, and make sure
that you have the required authorities. As an example, we will create and update a constant. To
figure out the format, we can use the new schema endpoint for getting format description. So we
will start with getting that info:
[Link]
From the output, you can see that the required authorities for create are F_CONSTANT_ADD, and
the important properties are: name and value. From this, we can create a JSON payload and save
it as a file called [Link]:
"name":
"PI",
"value":
"3.14159265359"
}
<constant name="PI" xmlns="http://[Link]/schema/dxf/2.0">
  <value>3.14159265359</value>
</constant>
We are now ready to create the new constant by sending a POST request to the constants
endpoint with the JSON payload using curl:
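A minimal sketch of that request, assuming the JSON payload is saved as constant.json (a
hypothetical file name), the demo credentials admin:district and a hypothetical instance at
localhost:8080:

curl -X POST -H "Content-Type: application/json" -d @constant.json -u admin:district "http://localhost:8080/api/constants"

A successful request returns an import summary like the one below.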
"status":
"SUCCESS",
"importCount":
{
"imported":
1,
"updated":
0,
"ignored":
0,
"deleted":
0
},
"type":
"Constant"
}
The process will be exactly the same for updating: you make your changes to the JSON/XML
payload, find out the ID of the constant, and then send a PUT request to the endpoint including
the ID:
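A sketch of the update, under the same assumptions as above and using the id abc123 from the
next section:

curl -X PUT -H "Content-Type: application/json" -d @constant.json -u admin:district "http://localhost:8080/api/constants/abc123"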
1.9.3 Deleting objects

Deleting objects is very straightforward: you will need to know the ID and the endpoint of the
type you want to delete. Let's continue our example from the last section and use a constant.
Let's assume that the id is abc123; then all you need to do is send a DELETE request to the
endpoint + id:
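A sketch of the request, under the same assumptions as the examples above:

curl -X DELETE -u admin:district "http://localhost:8080/api/constants/abc123"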
1.9.4 Adding and removing objects in collections

In order to add or remove objects to or from a collection of objects you can use the following
pattern:
/api/{collection-object}/{collection-object-id}/{collection-name}/{object-id}
You should use the POST method to add, and the DELETE method to remove an object. When
there is a many-to-many relationship between objects, you must first determine which object
owns the relationship. If it isn’t clear which object this is, try the call both ways to see which
works.
• collection object: The type of objects that owns the collection you want to modify.
• collection object id: The identifier of the object that owns the collection you want to
modify.
• object id: The identifier of the object you want to add or remove from the collection.
As an example, in order to remove a data element with identifier IDB from a data element group
with identifier IDA you can do a DELETE request:
DELETE /api/dataElementGroups/IDA/dataElements/IDB
To add a category option with identifier IDB to a category with identifier IDA you can do a POST
request:
POST /api/categories/IDA/categoryOptions/IDB
You can add or remove multiple objects from a collection in one request with a payload like this:
"identifiableObjects":
[
{
"id":
"IDA"
},
{
"id":
"IDB"
},
{
"id":
"IDC"
}
]
}
Adding Items:
POST /api/categories/IDA/categoryOptions
Replacing Items:
PUT /api/categories/IDA/categoryOptions
Delete Items:
DELETE /api/categories/IDA/categoryOptions
You can both add and remove objects from a collection in a single POST request to the following
URL:
POST /api/categories/IDA/categoryOptions
"additions":
[
{
"id":
"IDA"
},
{
"id":
"IDB"
},
{
"id":
"IDC"
}
],
"deletions":
[
{
"id":
"IDD"
},
{
"id":
"IDE"
},
{
"id":
"IDF"
}
]
}
1.9.5 Validating payloads
DHIS 2 supports system wide validation of metadata payloads, which means that create and
update operations on the API endpoints will be checked for valid payload before allowing
changes to be made. To find out what validations are in place for a specific endpoint, have a look
at the /api/schemas endpoint, i.e. to figure out which constraints a data element has, you
would go to /api/schemas/dataElement.
You can also validate your payload manually by sending it to the proper schema endpoint. If you
wanted to validate the constant from the create section before, you would send it like this:
POST /api/schemas/constant
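A sketch of the validation request, under the same assumptions as the create example above:

curl -X POST -H "Content-Type: application/json" -d @constant.json -u admin:district "http://localhost:8080/api/schemas/constant"

The response lists any validation violations, as in the example below.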
[
  {
    "message": "Required property missing.",
    "property": "type"
  },
  {
    "property": "aggregationOperator",
    "message": "Required property missing."
  },
  {
    "property": "domainType",
    "message": "Required property missing."
  },
  {
    "property": "shortName",
    "message": "Required property missing."
  }
]
For cases where you don't want or need to update all properties on an object (which would mean
downloading a potentially huge payload, changing one property, then uploading it again) we now
support partial updates of one or more properties.
The payload for doing partial updates is the same as when you are doing a full update; the only
difference is that you only include the properties you want to update, e.g.:
"name":
"Updated
Name",
"zeroIsSignificant":
true
}
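A sketch of such a partial update, assuming the PATCH verb, the data element id fbfJHSPpUQD
used earlier in this chapter, and the demo credentials:

curl -X PATCH -H "Content-Type: application/json" -d '{"name": "Updated Name", "zeroIsSignificant": true}' -u admin:district "http://localhost:8080/api/dataElements/fbfJHSPpUQD"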
1.10 Metadata export

This section explains the metadata API which is available at /api/metadata. XML and JSON
resource representations are supported.
/api/metadata
The most common parameters are described below in the “Export Parameter” table. You can also
apply this to all available types by using type:fields=<filter> and type:filter=<filter>.
You can also enable/disable the export of certain types by setting type=true|false.
Export Parameter
1.10.1 Metadata export examples
Export all metadata. Be careful as the response might be very large depending on your metadata
configuration:
/api/metadata
/api/metadata?defaultOrder=lastUpdated:desc
/api/metadata?indicators=true&indicatorGroups=true
/api/metadata?dataElements:fields=id,name&dataElements:order=displayName:desc
Export data elements and indicators where name starts with “ANC”:
/api/metadata?filter=name:^like:ANC&dataElements=true&indicators=true
When you want to exchange metadata for a data set, program, category combo or dashboard from
one DHIS2 instance to another there are dedicated endpoints available:
/api/dataSets/{id}/[Link]
/api/programs/{id}/[Link]
/api/categoryCombos/{id}/[Link]
/api/dashboards/{id}/[Link]
Export Parameter
1.11 Metadata import
This section explains the metadata import API. XML and JSON resource representations are
supported. Metadata can be imported using a POST request.
/api/metadata
The importer allows you to import metadata payloads which may include many different entities
and any number of objects per entity. The metadata export generated by the metadata export
API can be imported directly.
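A sketch of a metadata import with curl, assuming the payload is saved as metadata.json (a
hypothetical file name), the demo credentials admin:district and a hypothetical instance at
localhost:8080:

curl -X POST -H "Content-Type: application/json" -d @metadata.json -u admin:district "http://localhost:8080/api/metadata"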
The metadata import endpoint supports a variety of parameters, which are listed below.
Import Parameter (the first listed option is the default)

• importMode (COMMIT, VALIDATE): Sets the overall import mode; decides whether to only VALIDATE or also COMMIT the metadata. This has similar functionality as our old dryRun flag.
• identifier (UID, CODE, AUTO): Sets the identifier scheme to use for reference matching. AUTO means try UID first, then CODE.
• importReportMode (ERRORS, FULL, DEBUG): Sets the ImportReport mode; controls how much is reported back after the import is done. ERRORS only includes ObjectReports for objects which have errors. FULL returns an ObjectReport for all objects imported, and DEBUG returns the same plus a name for the object (if available).
• preheatMode (REFERENCE, ALL, NONE): Sets the preheater mode; used to signal if preheating should be done for ALL (as it was before with preheatCache=true) or do a more intelligent scan of the objects to see what to preheat (now the default). Setting this to NONE is not recommended.
• importStrategy (CREATE_AND_UPDATE, CREATE, UPDATE, DELETE): Sets the import strategy. CREATE_AND_UPDATE will try to match on identifier; if it doesn't exist, it will create the object.
• atomicMode (ALL, NONE): Sets the atomic mode. In the old importer we always did a best effort import, which means that even if some references did not exist, we would still import (i.e. missing data elements on a data element group import). The default for the new importer is to not allow this, and similarly to reject any validation errors. Setting the NONE mode emulates the old behavior.
• mergeMode (REPLACE, MERGE): Sets the merge mode. When doing updates we have two ways of merging the old object with the new one: MERGE mode will only overwrite the old property if the new one is not null, while for REPLACE mode all properties are overwritten regardless of being null or not.
• flushMode (AUTO, OBJECT): Sets the flush mode, which controls when to flush the internal cache. It is strongly recommended to keep this to AUTO (which is the default). Only use OBJECT for debugging purposes, where you are seeing Hibernate exceptions and want to pinpoint the exact place where the stack happens (Hibernate will only throw when flushing, so it can be hard to know which object had issues).
• skipSharing (false, true): Skip sharing properties; does not merge sharing when doing updates, and does not add user group access when creating new objects.
• skipValidation (false, true): Skip validation for import. NOT RECOMMENDED.
• async (false, true): Asynchronous import; returns immediately with a Location header pointing to the location of the importReport. The payload also contains a JSON object of the job created.
• inclusionStrategy (NON_NULL, ALWAYS, NON_EMPTY): NON_NULL includes properties which are not null, ALWAYS includes all properties, NON_EMPTY includes non-empty properties (will not include strings of 0 length, collections of size 0, etc.).
• userOverrideMode (NONE, CURRENT, SELECTED): Allows you to override the user property of every object you are importing. The options are NONE (do nothing), CURRENT (use import user), SELECTED (select a specific user using overrideUser=X).
• overrideUser (User ID): If userOverrideMode is SELECTED, use this parameter to select the user you want to override with.
An example of a metadata payload to be imported looks like this. Note how each entity type has
its own property with an array of objects:
"dataElements":
[
{
"name": "EPI -
IPV 3 doses
given",
"shortName": "EPI -
IPV 3 doses
given",
"aggregationType":
"SUM",
"domainType":
"AGGREGATE",
"valueType":
"INTEGER_ZERO_OR_POSITIVE"
},
{
"name": "EPI -
IPV 4 doses
given",
"shortName": "EPI -
IPV 4 doses
given",
"aggregationType":
"SUM",
"domainType":
"AGGREGATE",
"valueType":
"INTEGER_ZERO_OR_POSITIVE"
}
],
"indicators":
[
{
"name": "EPI
- ADS stock
used",
"shortName":
"ADS
stock
used",
"numerator": "#{LTb8XeeqeqI}
+#{Fs28ZQJET6V}-#{A3mHIZd2tPg}",
"numeratorDescription":
"ADS 0.05 ml used",
"denominator":
"1",
"denominatorDescription":
"1",
36
1 Web API 1.11 Metadata import
"annualized":
false,
"indicatorType":
{
"id":
"kHy61PbChXr"
}
}
]
}
When posting this payload to the metadata endpoint, the response will contain information
about the parameters used during the import and a summary per entity type including how
many objects were created, updated, deleted and ignored:
"importParams":
{
"userOverrideMode":
"NONE",
"importMode":
"COMMIT",
"identifier":
"UID",
"preheatMode":
"REFERENCE",
"importStrategy":
"CREATE_AND_UPDATE",
"atomicMode":
"ALL",
"mergeMode":
"REPLACE",
"flushMode":
"AUTO",
"skipSharing":
false,
"skipTranslation":
false,
"skipValidation":
false,
"metadataSyncImport":
false,
"firstRowIsHeader":
true,
"username":
"UNICEF_admin"
},
"status":
"OK",
"typeReports":
[
{
"klass":
"[Link]",
37
1 Web API 1.12 Metadata audit
"stats":
{
"created":
2,
"updated":
0,
"deleted":
0,
"ignored":
0,
"total":
2
}
},
{
"klass":
"[Link]",
"stats":
{
"created":
1,
"updated":
0,
"deleted":
0,
"ignored":
0,
"total":
1
}
}
],
"stats":
{
"created":
3,
"updated":
0,
"deleted":
0,
"ignored":
0,
"total":
3
}
}
1.12 Metadata audit

If you need information about who created, edited, or deleted DHIS2 metadata objects you can
enable metadata audit. There are two configuration options ([Link]) you can enable to
support this:
[Link] = on
This enables additional log output in your servlet container (e.g. tomcat [Link]) which
contains full information about the object created, object edited, or object deleted including full
JSON payload, date of audit event, and the user who did the action.
[Link] = on
This enables persisted audits, i.e. audits saved to the database. The information stored is the
same as with audit log; however this information is now placed into the metadataaudit table in
the database.
We do not recommend enabling these options on an empty database if you intend to bootstrap
your system, as it slows down the import and the audit might not be that useful.
1.12.1 Metadata audit query

If you have enabled persisted metadata audits on your DHIS2 instance, you can access metadata
audits at the following endpoint:
/api/33/metadataAudits
1.13 Render type (Experimental)

Some metadata types have a property named renderType. The render type property is a map
between a device and a renderingType. Applications can use this information as a hint on how
the object should be rendered on a specific device. For example, a mobile device might want to
render a data element differently than a desktop computer.
The devices currently supported are:

1. MOBILE
2. DESKTOP
The following table lists the metadata and rendering types available. The value type rendering
has additional constraints based on the metadata configuration, which will be shown in a second
table.
• MATRIX
• VERTICAL_RADIOBUTTONS
• HORIZONTAL_RADIOBUTTONS
• VERTICAL_CHECKBOXES
• HORIZONTAL_CHECKBOXES
• SHARED_HEADER_RADIOBUTTONS
• ICONS_AS_BUTTONS
• SPINNER
• ICON
• TOGGLE
• VALUE
• SLIDER
• LINEAR_SCALE
Since the default rendering of data elements and tracked entity attributes depends on the value
type of the object, there is also a DEFAULT type to tell the client it should be handled as
normal. Program Stage Section defaults to LISTING.
Value type rendering constraints (value type, whether the object has an option set, and the rendering types allowed):

• TRUE_ONLY, BOOLEAN (no option set): DEFAULT, VERTICAL_RADIOBUTTONS, HORIZONTAL_RADIOBUTTONS, VERTICAL_CHECKBOXES, HORIZONTAL_CHECKBOXES, TOGGLE
• Any value type (with an option set): DEFAULT, DROPDOWN, VERTICAL_RADIOBUTTONS, HORIZONTAL_RADIOBUTTONS, VERTICAL_CHECKBOXES, HORIZONTAL_CHECKBOXES, SHARED_HEADER_RADIOBUTTONS, ICONS_AS_BUTTONS, SPINNER, ICON
• INTEGER, INTEGER_POSITIVE, INTEGER_NEGATIVE, INTEGER_ZERO_OR_POSITIVE, NUMBER, UNIT_INTERVAL, PERCENTAGE (no option set): DEFAULT, VALUE, SLIDER, LINEAR_SCALE, SPINNER
A complete reference of the previous table can also be retrieved using the following endpoint:
GET /api/staticConfiguration/renderingOptions
Value type rendering also has some additional properties that can be set, which is usually needed
when rendering some of the specific types:
The renderingType can be set when creating or updating the metadata listed in the first table. An
example payload for the rendering type for program stage section looks like this:
"renderingType":
{
"type":
"MATRIX"
}
}
"renderingType":
{
"type":
"SLIDER",
"min":
0,
"max":
1000,
"step":
50,
"decimalPoints":
0
}
}
1.14 Object Style (Experimental)

Most metadata have a property named "style". This property can be used by clients to represent
the object in a certain way. The properties currently supported by style are as follows:
Style properties
Currently, there is no official list or support for icon-libraries, so this is currently up to the client
to provide. The following list shows all objects that support style:
• Data element
• Data set
• Indicator
• Option
• Program
• Program Indicator
• Program Section
• Program stage
• Relationship (Tracker)
When creating or updating any of these objects, you can include the following payload to change
the style:
"style":
{
"color":
"#ffffff",
"icon":
"my-
beautiful-
icon"
}
}
1.15 ActiveMQ Artemis / AMQP 1.0 integration

By default DHIS2 will start up an embedded instance of ActiveMQ Artemis when the instance is
booting up. For most use cases, you do not need to configure anything to make use of this, but if
your infrastructure has an existing AMQP 1.0 compliant service you want to use, you can change
the defaults in your [Link] file using the keys in the table down below.
1.16 CSV metadata import
DHIS2 supports import of metadata in the CSV format, such as data elements, organisation units
and validation rules. Properties for the various metadata objects are identified based on the
column order/column index (see below for details). You can omit non-required object properties/
columns, but since the column order is significant, an empty column must be included. In other
words, if you would like to specify properties/columns which appear late in the column order but
not specify certain columns which appear early in the order you can include empty/blank
columns for them.
The first row of the CSV file is considered to be a header and is ignored during import. The
comma character should be used as a text delimiter. Text which contains commas must be
enclosed in double quotes.
To upload metadata in CSV format you can make a POST request to the metadata endpoint:
POST /api/metadata?classKey=CLASS-KEY
The following object types are supported. The classKey query parameter is mandatory and can
be found next to each object type in the table below.
Tip
If using curl, the --data-binary option should be used as it preserves
line breaks and newlines, which is essential for CSV data.
As an example, to upload a file of data elements in CSV format with curl you can use the
following command:
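A sketch of the command, assuming a hypothetical file dataElements.csv, the demo credentials
admin:district, a hypothetical instance at localhost:8080, and DATA_ELEMENT as the classKey for
data elements:

curl --data-binary @dataElements.csv -H "Content-Type: application/csv" -u admin:district "http://localhost:8080/api/metadata?classKey=DATA_ELEMENT"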
The formats for the currently supported object types for CSV import are listed in the following
sections.
1.16.1 Data elements

An example of a CSV file for data elements can be seen below. The first row will always be
ignored. Note how you can skip columns and rely on default values to be used by the system.
You can also skip columns which you do not use, as long as they appear to the right of the ones
you do use.
name,uid,code,shortname,description
"Women participated skill development training",,"D0001","Women participated in training"
"Women participated community organizations",,"D0002","Women participated in organizations"
1.16.2 Organisation units
Organisation Unit CSV columns (in order; only Name is required):

1. Name (required): Name. Max 230 characters. Unique.
2. UID: Stable identifier. Max 11 characters. Will be generated by the system if not specified.
3. Code: Stable code. Max 50 characters.
4. Parent: UID of the parent organisation unit.
5. Short name: Will fall back to the first 50 characters of the name if unspecified. Max 50 characters. Unique.
6. Description: Free text description.
7. Opening date: Opening date of the organisation unit in YYYY-MM-DD format (default 1970-01-01).
8. Closed date: Closed date of the organisation unit in YYYY-MM-DD format; skip if currently open.
9. Comment: Free text comment for the organisation unit.
10. Feature type: Geospatial feature type (NONE | MULTI_POLYGON | POLYGON | POINT | SYMBOL).
11. Coordinates: Coordinates used for geospatial analysis in GeoJSON format.
12. URL: URL to the organisation unit resource. Max 255 characters.
13. Contact person: Contact person for the organisation unit. Max 255 characters.
14. Address: Address for the organisation unit. Max 255 characters.
15. Email: Email for the organisation unit. Max 150 characters.
16. Phone number: Phone number for the organisation unit. Max 150 characters.
A minimal example for importing organisation units with a parent unit looks like this:
name,uid,code,parent
"West province",,"WESTP","ImspTQPwCqd"
"East province",,"EASTP","ImspTQPwCqd"
1.16.4 Option sets
Option Set CSV columns (in order):

1. OptionSetName (required): Name. Max 230 characters. Unique. Should be repeated for each option.
2. OptionSetUID: Stable identifier. Max 11 characters. Will be generated by the system if not specified. Should be repeated for each option.
3. OptionSetCode: Stable code. Max 50 characters. Should be repeated for each option.
4. OptionName (required): Option name. Max 230 characters.
5. OptionUID: Stable identifier. Max 11 characters. Will be generated by the system if not specified.
6. OptionCode (required): Stable code. Max 50 characters.
The format for option sets is special. The three first values represent an option set. The three last
values represent an option. The first three values representing the option set should be repeated
for each option.
optionsetname,optionsetuid,optionsetcode,optionname,optionuid,optioncode
"Color",,"COLOR","Blue",,"BLUE"
"Color",,"COLOR","Green",,"GREEN"
"Color",,"COLOR","Yellow",,"YELLOW"
"Sex",,,"Male",,"MALE"
"Sex",,,"Female",,"FEMALE"
"Sex",,,"Unknown",,"UNKNOWN"
"Result",,,"High",,"HIGH"
"Result",,,"Medium",,"MEDIUM"
"Result",,,"Low",,"LOW"
"Impact","cJ82jd8sd32","IMPACT","Great",,"GREAT"
"Impact","cJ82jd8sd32","IMPACT","Medium",,"MEDIUM"
"Impact","cJ82jd8sd32","IMPACT","Poor",,"POOR"
1.16.5 Option group
Option Group CSV columns (in order):

1. OptionGroupName (required): Name. Max 230 characters. Unique. Should be repeated for each option.
2. OptionGroupUid: Stable identifier. Max 11 characters. Will be generated by the system if not specified. Should be repeated for each option.
3. OptionGroupCode: Stable code. Max 50 characters. Should be repeated for each option.
4. OptionGroupShortName (required): Short name. Max 50 characters. Unique. Should be repeated for each option.
5. OptionSetUid (required): Stable identifier. Max 11 characters. Should be repeated for each option.
6. OptionUid: Stable identifier. Max 11 characters.
7. OptionCode: Stable code. Max 50 characters.
optionGroupName,optionGroupUid,optionGroupCode,optionGroupShortName,optionSetUid,optionUid,optionCode
optionGroupA,,,groupA,xmRubJIhmaK,,OptionA
optionGroupA,,,groupA,xmRubJIhmaK,,OptionB
optionGroupB,,,groupB,QYDAByFgTr1,,OptionC
1.16.6 Option Group Set

Option Group Set CSV columns (in order):

1. OptionGroupSetName (required): Name. Max 230 characters. Unique. Should be repeated for each option.
2. OptionGroupSetUid: Stable identifier. Max 11 characters. Will be generated by the system if not specified. Should be repeated for each option.
3. OptionGroupSetCode: Stable code. Max 50 characters. Should be repeated for each option.
4. OptionGroupSetDescription: Description. Should be repeated for each option.
5. DataDimension: TRUE, FALSE.
6. OptionSetUid: OptionSet UID. Stable identifier. Max 11 characters.
name,uid,code,description,datadimension,optionsetuid
optiongroupsetA,,,,,xmRubJIhmaK
optiongroupsetB,,,,false,QYDAByFgTr1
1.16.7 Collection membership

In addition to importing objects, you can also choose to only import the group-member
relationship between an object and a group. Currently, the following group and object pairs are
supported:
Collection membership CSV columns (in order):

1. UID (required): The UID of the collection to add an object to.
2. UID (required): The UID of the object to add to the collection.
1.16.8 Other objects
Data Element Group, Category Option, Category Option Group, Organisation Unit Group CSV
columns (in order):

1. Name (required): Name. Max 230 characters. Unique.
2. UID: Stable identifier. Max 11 characters. Will be generated by the system if not specified.
3. Code: Stable code. Max 50 characters.
4. Short name: Short name. Max 50 characters.
name,uid,code,shortname
"Male",,"MALE"
"Female",,"FEMALE"
1.17 Deleted objects

The deleted objects resource provides a log of metadata objects being deleted.

/api/deletedObjects

Whenever a metadata object is deleted, a log is kept of the uid, code, type and time of deletion.
This API is available at /api/deletedObjects. Field filtering and object filtering work similarly
to other metadata resources.
GET /api/[Link]?klass=DataElement
Get deleted object of type indicator which was deleted in 2015 and forward:
GET /api/[Link]?klass=Indicator&deletedAt=2015-01-01
1.18 Favorites
Certain types of metadata objects can be marked as favorites for the currently logged in user.
This applies currently for dashboards.
/api/dashboards/<uid>/favorite
To make a dashboard a favorite you can make a POST request (no content type required) to a
URL like this:
/api/dashboards/iMnYyBfSxmM/favorite
To remove a dashboard as a favorite you can make a DELETE request using the same URL as
above.
The favorite status will appear as a boolean favorite field on the object (e.g. the dashboard) in the
metadata response.
1.19 Subscriptions
A logged-in user can subscribe to certain types of objects. Currently subscribable objects are those
of type Chart, EventChart, EventReport, Map, ReportTable and Visualization.
Note
The Chart and ReportTable objects are deprecated. Use Visualization
instead.
To get the subscribers of an object (return an array of user IDs) you can make a GET request:
/api/<object-type>/<object-id>/subscribers
/api/charts/DkPKc1EUmC2/subscribers
To check whether the current user is subscribed to an object (returns a boolean) you can perform
a GET call:
/api/<object-type>/<object-id>/subscribed
/api/charts/DkPKc1EUmC2/subscribed
To subscribe or unsubscribe the current user you can make a POST or DELETE request, respectively, to:

/api/<object-type>/<object-id>/subscriber
1.20 File resources

File resources are objects used to represent and store binary content. The FileResource object
itself contains the file meta-data (name, Content-Type, size, etc.) as well as a key allowing retrieval
of the contents from a database-external file store. The FileResource object is stored in the
database like any other but the content (file) is stored elsewhere and is retrievable using the
contained reference (storageKey).
/api/fileResources
The contents of file resources are not directly accessible but are referenced from other objects
(such as data values) to store binary content of virtually unlimited size.
Creation of the file resource itself is done through the /api/fileResources endpoint as a
multipart upload POST-request:
curl "[Link]
fileResources" -X POST
-F "file=@/Path/to/file;filename=name-
[Link]"
The only form parameter required is the file which is the file to upload. The filename and
content-type should also be included in the request but will be replaced with defaults when not
supplied.
On successfully creating a file resource the returned data will contain a response field which in
turn contains the fileResource like this:
"httpStatus":
"Accepted",
"httpStatusCode":
202,
"status":
"OK",
"response":
{
"responseType":
"FileResource",
"fileResource":
{
"name":
"name-
of-
[Link]",
"created":
"2015-10-16T[Link].654+0000",
"lastUpdated":
"2015-10-16T[Link].667+0000",
"externalAccess":
false,
"publicAccess":
"--------",
"user":
{ ... },
"displayName":
"name-of-
[Link]",
"contentType":
"image/
png",
"contentLength":
512571,
"contentMd5":
"4e1fc1c3f999e5aa3228d531e4adde58",
55
1 Web API 1.20.1 File resource constraints
"storageStatus":
"PENDING",
"id":
"xm4JwRwke0i"
}
}
}
Note that the response is a 202 Accepted, indicating that the returned resource has been
submitted for background processing (persisting to the external file store in this case). Also, note
the storageStatus field which indicates whether the contents have been stored or not. At this
point, the persistence to the external store is not yet finished (it is likely being uploaded to a
cloud-based store somewhere) as seen by the PENDING status.
Even though the content has not been fully stored yet the file resource can now be used, for
example as referenced content in a data value (see Working with file data values). If we need to
check the updated storageStatus or otherwise retrieve the metadata of the file, the
fileResources endpoint can be queried.
curl "[Link] -H
"Accept: application/json"
This request will return the FileResource object as seen in the response of the above example.
1.20.1 File resource constraints
• File resources must be referenced (assigned) from another object in order to be persisted
in the long term. A file resource which is created but not referenced by another object such
as a data value is considered to be in staging. Any file resources which are in this state and
are older than two hours will be marked for deletion and will eventually be purged from
the system.
• The ID returned by the initial creation of the file resource is not retrievable from any other
location unless the file resource has been referenced (in which case the ID will be stored as the
reference), so losing it will require the POST request to be repeated and a new object to be
created. The orphaned file resource will be cleaned up automatically.
• File resource objects are immutable, meaning modification is not allowed and requires
creating a completely new resource instead.
1.21 Metadata versioning
This section explains the metadata versioning APIs, available from version 2.24.
• /api/metadata/version: This endpoint will return the current metadata version of the
system on which it is invoked.
Query Parameters
1.21.1 Get metadata version examples
Request:
/api/metadata/version
Response:
"name":
"Version_4",
"created":
"2016-06-30T[Link].684+0000",
"lastUpdated":
"2016-06-30T[Link].685+0000",
"externalAccess":
false,
"displayName":
"Version_4",
"type":
"BEST_EFFORT",
"hashCode":
"848bf6edbaf4faeb7d1a1169445357b0",
"id":
"Ayz2AEMB6ry"
}
Request:
/api/metadata/version?versionName=Version_2
Response:
"name":
"Version_2",
"created":
"2016-06-30T[Link].238+0000",
"lastUpdated":
"2016-06-30T[Link].239+0000",
"externalAccess":
false,
"displayName":
"Version_2",
"type":
"BEST_EFFORT",
"hashCode":
"8050fb1a604e29d5566675c86d02d10b",
57
1 Web API 1.21.2 Get the list of all metadata versions
"id":
"SaNyhusVxBG"
}
1.21.2 Get the list of all metadata versions
Query Parameters
Request:
/api/metadata/version/history
Response:
"metadataversions":
[
{
"name":
"Version_1",
"type":
"BEST_EFFORT",
"created":
"2016-06-30T[Link].139+0000",
"id":
"SjnhUp6r4hG",
"hashCode":
"fd1398ff7ec9fcfd5b59d523c8680798"
},
{
"name":
"Version_2",
"type":
"BEST_EFFORT",
"created":
"2016-06-30T[Link].238+0000",
"id":
"SaNyhusVxBG",
"hashCode":
"8050fb1a604e29d5566675c86d02d10b"
},
58
1 Web API 1.21.2 Get the list of all metadata versions
"name":
"Version_3",
"type":
"BEST_EFFORT",
"created":
"2016-06-30T[Link].680+0000",
"id":
"FVkGzSjAAYg",
"hashCode":
"70b779ea448b0da23d8ae0bd59af6333"
}
]
}
Example: Get the list of all versions in this system created after “Version_2”
Request:
/api/metadata/version/history?baseline=Version_2
Response:
"metadataversions":
[
{
"name":
"Version_3",
"type":
"BEST_EFFORT",
"created":
"2016-06-30T[Link].680+0000",
"id":
"FVkGzSjAAYg",
"hashCode":
"70b779ea448b0da23d8ae0bd59af6333"
},
{
"name":
"Version_4",
"type":
"BEST_EFFORT",
"created":
"2016-06-30T[Link].684+0000",
"id":
"Ayz2AEMB6ry",
"hashCode":
"848bf6edbaf4faeb7d1a1169445357b0"
}
]
}
1.21.3 Create metadata version
• /api/metadata/version/create: This endpoint will create a metadata version of the type specified in the parameter.
Query Parameters
• BEST_EFFORT
• ATOMIC
Users can select the type of metadata version to be created. The metadata version type governs how the importer should treat the given version; it is used when importing the metadata. There are two types:
• BEST_EFFORT: This type suggests that missing references can be ignored and the importer
can continue importing the metadata (e.g. missing data elements on a data element group
import).
• ATOMIC: This type ensures a strict type checking of the metadata references and the
metadata import will fail if any of the references do not exist.
Note
It’s recommended to have an ATOMIC type of versions to ensure that all
systems (central and local) have the same metadata. Any missing
reference is caught in the validation phase itself. Please see the
importer details for a full explanation.
Request:
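The original request example is not reproduced here; as a sketch, assuming the create endpoint and type query parameter described above and a placeholder server:
curl -X POST "https://<server>/api/metadata/version/create?type=BEST_EFFORT" -u admin:district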
Response:
"name":
"Version_1",
"created":
"2016-06-30T[Link].139+0000",
"lastUpdated":
"2016-06-30T[Link].333+0000",
"externalAccess":
false,
"publicAccess":
"--------",
"user":
{
"name":
"John
Traore",
"created":
"2013-04-18T[Link].407+0000",
60
1 Web API 1.21.4 Download version metadata
"lastUpdated":
"2016-04-06T[Link].571+0000",
"externalAccess":
false,
"displayName":
"John
Traore",
"id":
"xE7jOejl9FI"
},
"displayName":
"Version_1",
"type":
"BEST_EFFORT",
"hashCode":
"fd1398ff7ec9fcfd5b59d523c8680798",
"id":
"SjnhUp6r4hG"
}
1.21.4 Download version metadata
Path parameters
Request:
Response:
{
  "date": "2016-06-30T[Link].120+0000",
  "dataElements": [
    {
      "code": "ANC 5th Visit",
      "created": "2016-06-30T[Link].870+0000",
      "lastUpdated": "2016-06-30T[Link].870+0000",
      "name": "ANC 5th Visit",
      "id": "sCuZKDsix7Y",
      "shortName": "ANC 5th Visit ",
      "aggregationType": "SUM",
      "domainType": "AGGREGATE",
      "zeroIsSignificant": false,
      "valueType": "NUMBER",
      "categoryCombo": {
        "id": "p0KPaWEg3cf"
      },
      "user": {
        "id": "xE7jOejl9FI"
      }
    }
  ]
}
1.22 Metadata Synchronization
This section explains the metadata synchronization API, available from version 2.24.
Query parameters
• This API should be used with utmost care. Please note that there is an alternate way to achieve sync in a completely automated manner by leveraging the Metadata Sync Task from the “Data Administration” app. See Chapter 22, Section 22.17 of the User Manual for more details regarding the Metadata Sync Task.
• This sync API can alternatively be used to sync metadata for versions which have failed in the metadata sync scheduler. Because it depends on the given metadata version number, care should be taken with the order in which it is invoked. For example, if this API is used to sync some higher version from the central instance, the sync might fail because the metadata dependencies are not present in the local instance.
• Assume the local instance is at Version_12. If this endpoint is used to sync Version_15 (of type BEST_EFFORT) from the central instance, the scheduler will continue syncing metadata from Version_16 onwards, so the local instance will not have the metadata versions between Version_12 and Version_15. You then need to sync the missing versions manually using these endpoints.
Request:
1.23 Data values
1.23.1 Sending data values
/api/33/dataValueSets
A common use-case for system integration is the need to send a set of data values from a third-
party system into DHIS. In this example, we will use the DHIS2 demo on http://
[Link]/demo as basis. We assume that we have collected case-based data using a
simple software client running on mobile phones for the Mortality <5 years data set in the
community of Ngelehun CHC (in Badjia chiefdom, Bo district) for the month of January 2014. We
have now aggregated our data into a statistical report and want to send that data to the DHIS2
instance. The base URL to the demo API is [Link] The following
links are relative to the base URL.
The resource which is most appropriate for our purpose of sending data values is the /api/
dataValueSets resource. A data value set represents a set of data values which have a
relationship, usually from being captured off the same data entry form. The format looks like
this:
<dataValueSet xmlns="http://[Link]/schema/dxf/2.0" dataSet="dataSetID"
  completeDate="date" period="period" orgUnit="orgUnitID" attributeOptionCombo="aocID">
  <dataValue dataElement="dataElementID" categoryOptionCombo="cocID" value="1" comment="comment1"/>
  <dataValue dataElement="dataElementID" categoryOptionCombo="cocID" value="2" comment="comment2"/>
  <dataValue dataElement="dataElementID" categoryOptionCombo="cocID" value="3" comment="comment3"/>
</dataValueSet>
"dataSet":
"dataSetID",
"completeDate":
"date",
"period":
"period",
"orgUnit":
"orgUnitID",
"attributeOptionCombo":
"aocID",
"dataValues":
[
{
"dataElement":
"dataElementID",
"categoryOptionCombo":
"cocID",
"value":
"1",
"comment":
"comment1"
},
{
"dataElement":
"dataElementID",
"categoryOptionCombo":
"cocID",
"value":
"2",
"comment":
"comment2"
},
{
"dataElement":
"dataElementID",
"categoryOptionCombo":
"cocID",
"value":
"3",
64
1 Web API 1.23.1 Sending data values
"comment":
"comment3"
}
]
}
"dataelement","period","orgunit","catoptcombo","attroptcombo","value","strby","lstupd","cmt"
"dataElementID","period","orgUnitID","cocID","aocID","1","username","2015-04-01","comment1"
"dataElementID","period","orgUnitID","cocID","aocID","2","username","2015-04-01","comment2"
"dataElementID","period","orgUnitID","cocID","aocID","3","username","2015-04-01","comment3"
Note
Please refer to the date and period section above for time formats.
From the example, we can see that we need to identify the period, the data set, the org unit
(facility) and the data elements for which to report.
To obtain the identifier for the data set we make a request to the /api/dataSets resource.
From there we find and follow the link to the Mortality < 5 years data set which leads us to /api/
dataSets/pBOMPrpg1QX. The resource representation for the Mortality < 5 years data set
conveniently advertises links to the data elements which are members of it. From here we can
follow these links and obtain the identifiers of the data elements. For brevity we will only report
on three data elements: Measles with id f7n9E0hX8qk, Dysentery with id Ix2HsbDMLea and
Cholera with id eY5ehpbEsB7.
What remains is to get hold of the identifier of the organisation unit. The dataSet representation
conveniently provides a link to organisation units which report on it so we search for Ngelehun
CHC and follow the link to the HTML representation at /api/organisationUnits/
DiszpKrYNg8, which tells us that the identifier of this org unit is DiszpKrYNg8.
From our case-based data, we assume that we have 12 cases of measles, 14 cases of dysentery
and 16 cases of cholera. We have now gathered enough information to be able to put together
the XML data value set message:
<dataValueSet xmlns="http://[Link]/schema/dxf/2.0" dataSet="pBOMPrpg1QX"
  completeDate="2014-02-03" period="201401" orgUnit="DiszpKrYNg8">
  <dataValue dataElement="f7n9E0hX8qk" value="12"/>
  <dataValue dataElement="Ix2HsbDMLea" value="14"/>
  <dataValue dataElement="eY5ehpbEsB7" value="16"/>
</dataValueSet>
In JSON format:
"dataSet":
"pBOMPrpg1QX",
"completeDate":
"2014-02-03",
"period":
"201401",
"orgUnit":
"DiszpKrYNg8",
"dataValues":
[
{
"dataElement":
"f7n9E0hX8qk",
"value":
"1"
},
{
"dataElement":
"Ix2HsbDMLea",
"value":
"2"
},
{
"dataElement":
"eY5ehpbEsB7",
"value":
"3"
}
]
}
To perform functional testing we will use the curl tool which provides an easy way of transferring
data using HTTP. First, we save the data value set XML content in a file called
[Link]. From the directory where this file resides we invoke the following from the
command line:
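The original command is not reproduced here; as a sketch, assuming the file is saved as datavalueset.xml and using a placeholder server:
curl -d @datavalueset.xml "https://<server>/api/dataValueSets" -H "Content-Type:application/xml" -u admin:district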
For sending JSON content you must set the content-type header accordingly:
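For example, assuming a JSON payload saved as datavalueset.json (again a sketch with a placeholder server):
curl -d @datavalueset.json "https://<server>/api/dataValueSets" -H "Content-Type:application/json" -u admin:district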
The command will dispatch a request to the demo Web API, set application/xml as the
content-type and authenticate using admin/district as username/password. If all goes well
this will return a 200 OK HTTP status code. You can verify that the data has been received by
opening the data entry module in DHIS2 and select the org unit, data set and period used in this
example.
The API follows normal semantics for error handling and HTTP status codes. If you supply an
invalid username or password, 401 Unauthorized is returned. If you supply a content-type
other than application/xml, 415 Unsupported Media Type is returned. If the XML content
is invalid according to the DXF namespace, 400 Bad Request is returned. If you provide an
invalid identifier in the XML content, 409 Conflict is returned together with a descriptive
message.
1.23.2 Sending bulks of data values
The previous example showed us how to send a set of related data values sharing the same period and organisation unit. This example will show how to send large bulks of data values which are not necessarily logically related.
Again we will interact with the /api/dataValueSets resource. This time we will not specify the
dataSet and completeDate attributes. Also, we will specify the period and orgUnit attributes
on the individual data value elements instead of on the outer data value set element. This will
enable us to send data values for various periods and organisation units:
<dataValueSet xmlns="http://[Link]/schema/dxf/2.0">
  <dataValue dataElement="f7n9E0hX8qk" period="201401" orgUnit="DiszpKrYNg8" value="12"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201401" orgUnit="FNnj3jKGS7i" value="14"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201402" orgUnit="DiszpKrYNg8" value="16"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201402" orgUnit="Jkhdsf8sdf4" value="18"/>
</dataValueSet>
In JSON format:
"dataValues":
[
{
"dataElement":
"f7n9E0hX8qk",
"period":
"201401",
"orgUnit":
"DiszpKrYNg8",
"value":
"12"
67
1 Web API 1.23.2 Sending bulks of data values
},
{
"dataElement":
"f7n9E0hX8qk",
"period":
"201401",
"orgUnit":
"FNnj3jKGS7i",
"value":
"14"
},
{
"dataElement":
"f7n9E0hX8qk",
"period":
"201402",
"orgUnit":
"DiszpKrYNg8",
"value":
"16"
},
{
"dataElement":
"f7n9E0hX8qk",
"period":
"201402",
"orgUnit":
"Jkhdsf8sdf4",
"value":
"18"
}
]
}
In CSV format:
"dataelement","period","orgunit","categoryoptioncombo","attributeoptioncombo","value"
"f7n9E0hX8qk","201401","DiszpKrYNg8","bRowv6yZOF2","bRowv6yZOF2","1"
"Ix2HsbDMLea","201401","DiszpKrYNg8","bRowv6yZOF2","bRowv6yZOF2","2"
"eY5ehpbEsB7","201401","DiszpKrYNg8","bRowv6yZOF2","bRowv6yZOF2","3"
Note that when using CSV format you must use the binary data option to preserve the line-
breaks in the CSV file:
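A sketch, assuming the CSV payload is saved as datavalueset.csv; --data-binary preserves the line breaks that -d would strip:
curl --data-binary @datavalueset.csv "https://<server>/api/dataValueSets" -H "Content-Type:application/csv" -u admin:district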
The data value set resource provides an XML response which is useful when you want to verify
the impact your request had. The first time we send the data value set request above the server
will respond with the following import summary:
<importSummary>
  <dataValueCount imported="2" updated="1" ignored="1"/>
  <dataSetComplete>false</dataSetComplete>
</importSummary>
This message tells us that 2 data values were imported, 1 data value was updated and 1 data value was ignored. The single update comes as a result of us sending that data value in the previous example. A data value will be ignored if it references a non-existing data element, period, org unit or data set. In our case, the single ignored value was caused by the last data value having an invalid reference to an org unit. The data set complete element will display the date on which the data value set was completed, or false if no completeDate attribute was supplied.
1.23.3 Import parameters
All parameters are optional and can be supplied as query parameters in the request URL like this:
/api/33/dataValueSets?dataElementIdScheme=code&orgUnitIdScheme=name
&dryRun=true&importStrategy=CREATE
They can also be supplied as XML attributes on the data value set element like below. XML
attributes will override query string parameters.
<dataValueSet xmlns="[Link]
schema/dxf/2.0"
dataElementIdScheme="code"
orgUnitIdScheme="name"
dryRun="true"
importStrategy="CREATE">
</
dataValueSet>
Note that the preheatCache parameter can have a huge impact on performance. For small
import files, leaving it to false will be fast. For large import files which contain a large number of
distinct data elements and organisation units, setting it to true will be orders of magnitude faster.
Data value import supports a set of value types. For each value type, there is a special
requirement. The following table lists the edge cases for value types.
Value type Requirements Comment
BOOLEAN true | True | TRUE | false | False | FALSE | 1 | 0 | t | f Used when the value is a boolean, true or false value. The import service does not care if the input begins with an uppercase or lowercase letter, or if it's all uppercase.
Regarding the id schemes, by default the identifiers used in the XML messages use the DHIS2
stable object identifiers referred to as UID. In certain interoperability situations we might
experience that an external system decides the identifiers of the objects. In that case we can use
the code property of the organisation units and other objects to set fixed identifiers. When
importing data values we hence need to reference the code property instead of the identifier
property of these metadata objects. Identifier schemes can be specified in the XML message as
well as in the request as query parameters. To specify it in the XML payload you can do this:
<dataValueSet xmlns="http://[Link]/schema/dxf/2.0"
  dataElementIdScheme="CODE" orgUnitIdScheme="UID" idScheme="CODE">
</dataValueSet>
The parameter table above explains how the id schemes can be specified as query parameters.
The following rules apply for what takes precedence:
• Id schemes defined in the XML or JSON payload take precedence over id schemes defined
as URL query parameters.
• The default id scheme is UID, which will be used if no explicit id scheme is defined.
The following identifier schemes are available:
• uid (default)
• code
• name
The attribute option is special and refers to meta-data attributes which have been marked as
unique. When using this option, attribute must be immediately followed by the identifier of
the attribute, e.g. “attribute:DnrLSdo4hMl”.
Data values can be sent and imported in an asynchronous fashion by supplying an async query
parameter set to true:
/api/33/dataValueSets?async=true
This will initiate an asynchronous import job for which you can monitor the status at the task
summaries API. The API response indicates the unique identifier of the job, type of job and the
URL you can use to monitor the import job status. The response will look similar to this:
"httpStatus":
"OK",
"httpStatusCode":
200,
"status":
"OK",
"message":
"Initiated
dataValueImport",
"response":
{
"name":
"dataValueImport",
"id":
"YR1UxOUXmzT",
"created":
"2018-08-20T[Link].429",
"jobType":
"DATAVALUE_IMPORT",
"relativeNotifierEndpoint": "/api/system/tasks/
DATAVALUE_IMPORT/YR1UxOUXmzT"
}
}
Please read the section on asynchronous task status for more information.
1.23.4 CSV data value format
The following section describes the CSV format used in DHIS2. The first row is assumed to be a header row and will be ignored during import.
An example of a CSV file which can be imported into DHIS2 is seen below.
"dataelement","period","orgunit","catoptcombo","attroptcombo","value","storedby","timestamp"
"DUSpd8Jq3M7","201202","gP6hn503KUX","Prlt0C1RF0s",,"7","bombali","2010-04-17"
"DUSpd8Jq3M7","201202","gP6hn503KUX","V6L425pT3A0",,"10","bombali","2010-04-17"
"DUSpd8Jq3M7","201202","OjTS752GbZE","V6L425pT3A0",,"9","bombali","2010-04-06"
1.23.5 Generating data value set template
To generate a data value set template for a certain data set you can use the /api/dataSets/
<id>/dataValueSet resource. XML and JSON response formats are supported. Example:
/api/dataSets/BfMAe6Itzgt/[Link]
The parameters you can use to further adjust the output are described below:
1.23.6 Reading data values
This section explains how to retrieve data values from the Web API by interacting with the
dataValueSets resource. Data values can be retrieved in XML, JSON and CSV format. Since we
want to read data we will use the GET HTTP verb. We will also specify that we are interested in
the XML resource representation by including an Accept HTTP header with our request. The following query parameters are supported:
Parameter Description
dataSet Data set identifier. Can be repeated any number of
times.
dataElementGroup Data element group identifier. Can be repeated any
number of times.
period Period identifier in ISO format. Can be repeated any
number of times.
startDate Start date for the time span of the values to export.
endDate End date for the time span of the values to export.
orgUnit Organisation unit identifier. Can be repeated any
number of times.
children Whether to include the children in the hierarchy of the
organisation units.
orgUnitGroup Organisation unit group identifier. Can be repeated any
number of times.
attributeOptionCombo Attribute option combo identifier. Can be repeated any
number of times.
includeDeleted Whether to include deleted data values.
lastUpdated Include only data values which are updated since the
given time stamp.
lastUpdatedDuration Include only data values which are updated within the
given duration. The format is <value><time-unit>, where
the supported time units are “d” (days), “h” (hours), “m”
(minutes) and “s” (seconds).
limit The max number of results in the response.
idScheme Property of meta data objects to use for data values in
response.
dataElementIdScheme Property of the data element object to use for data
values in response.
orgUnitIdScheme Property of the org unit object to use for data values in
response.
categoryOptionComboIdScheme Property of the category option combo and attribute
option combo objects to use for data values in
response.
dataSetIdScheme Property of the data set object to use in the response.
The following response formats are supported:
• xml (application/xml)
• json (application/json)
• csv (application/csv)
• adx (application/adx+xml)
Assuming that we have posted data values to DHIS2 according to the previous section called
Sending data values we can now put together our request for a single data value set and request
it using cURL:
curl "[Link]
dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8"
-H "Accept:application/xml" -u
admin:district
We can also use the start and end dates query parameters to request a larger bulk of data
values. I.e. you can also request data values for multiple data sets and org units and a time span
in order to export larger chunks of data. Note that the period query parameter takes precedence
over the start and end date parameters. An example looks like this:
curl "[Link]
dataSet=pBOMPrpg1QX&dataSet=BfMAe6Itzgt
&startDate=2013-01-01&endDate=2013-01-31&orgUnit=YuQRtpLP10I&orgUnit=vWbkYPRmKyS&children=true"
-H "Accept:application/xml" -u
admin:district
To retrieve data values which have been created or updated within the last 10 days you can make
a request like this:
/api/dataValueSets?dataSet=pBOMPrpg1QX&orgUnit=DiszpKrYNg8&lastUpdatedDuration=10d
The response will look like this:
<?xml version='1.0' encoding='UTF-8'?>
<dataValueSet xmlns="http://[Link]/schema/dxf/2.0" dataSet="pBOMPrpg1QX"
  completeDate="2014-01-02" period="201401" orgUnit="DiszpKrYNg8">
  <dataValue dataElement="eY5ehpbEsB7" period="201401" orgUnit="DiszpKrYNg8" categoryOptionCombo="bRowv6yZOF2" value="10003"/>
  <dataValue dataElement="Ix2HsbDMLea" period="201401" orgUnit="DiszpKrYNg8" categoryOptionCombo="bRowv6yZOF2" value="10002"/>
  <dataValue dataElement="f7n9E0hX8qk" period="201401" orgUnit="DiszpKrYNg8" categoryOptionCombo="bRowv6yZOF2" value="10001"/>
</dataValueSet>
To retrieve the data value set in JSON format:
/api/[Link]?dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8
"dataSet":
"pBOMPrpg1QX",
"completeDate":
"2014-02-03",
"period":
"201401",
"orgUnit":
"DiszpKrYNg8",
"dataValues":
[
{
"dataElement":
"eY5ehpbEsB7",
"categoryOptionCombo":
"bRowv6yZOF2",
"period":
"201401",
"orgUnit":
"DiszpKrYNg8",
"value":
"10003"
},
{
"dataElement":
"Ix2HsbDMLea",
"categoryOptionCombo":
"bRowv6yZOF2",
"period":
"201401",
"orgUnit":
"DiszpKrYNg8",
"value":
"10002"
},
{
"dataElement":
"f7n9E0hX8qk",
76
1 Web API 1.23.7 Sending, reading and deleting individual data values
"categoryOptionCombo":
"bRowv6yZOF2",
"period":
"201401",
"orgUnit":
"DiszpKrYNg8",
"value":
"10001"
}
]
}
Note that data values are softly deleted, i.e. a deleted value has the deleted property set to true
instead of being permanently deleted. This is useful when integrating multiple systems in order
to communicate deletions. You can include deleted values in the response like this:
/api/33/[Link]?dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8&includeDeleted=true
To retrieve the data values in CSV format:
/api/33/[Link]?dataSet=pBOMPrpg1QX&period=201401&orgUnit=DiszpKrYNg8
dataelement,period,orgunit,catoptcombo,attroptcombo,value,storedby,lastupdated,comment,flwup
f7n9E0hX8qk,201401,DiszpKrYNg8,bRowv6yZOF2,bRowv6yZOF2,12,system,2015-04-05T[Link].000,comment1,false
Ix2HsbDMLea,201401,DiszpKrYNg8,bRowv6yZOF2,bRowv6yZOF2,14,system,2015-04-05T[Link].000,comment2,false
eY5ehpbEsB7,201401,DiszpKrYNg8,bRowv6yZOF2,bRowv6yZOF2,16,system,2015-04-05T[Link].000,comment3,false
FTRrcoaog83,201401,DiszpKrYNg8,bRowv6yZOF2,bRowv6yZOF2,12,system,2014-03-02T[Link].519,comment4,false
The following constraints apply to the dataValueSets resource:
• Either at least one period or a start date and end date must be specified.
• Organisation units must be within the hierarchy of the organisation units of the
authenticated user.
1.23.7 Sending, reading and deleting individual data values
This example will show how to send individual data values to be saved in a request. This can be achieved by sending a POST request to the dataValues resource:
/api/dataValues
Query parameter Required Description
de Yes Data element identifier
pe Yes Period identifier
ou Yes Organisation unit identifier
co No Category option combo identifier, default will be used if omitted
cc No (must be combined with cp) Attribute category combo identifier
cp No (must be combined with cc) Attribute category option identifiers, separated with ; for multiple values
ds No Data set, to check if POST or DELETE is allowed for period and organisation unit. If specified, the data element must be assigned to this data set. If not specified, a data set containing the data element will be chosen to check if the operation is allowed.
value No Data value. For boolean values, the following will be accepted: true | True | TRUE | false | False | FALSE | 1 | 0 | t | f
comment No Data comment
followUp No Follow up on data value, will toggle the current boolean value
If any of the identifiers given are invalid, if the data value or comment is invalid or if the data is
locked, the response will contain the 409 Conflict status code and descriptive text message. If the
operation leads to a saved or updated value, 200 OK will be returned. An example of a request
looks like this:
curl "[Link]
de=s46m5MS0hxu
&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s&value=12"
-X POST -u
admin:district
This resource also allows a special syntax for associating the value to an attribute option
combination. This can be done by sending the identifier of the attribute category combination,
together with the identifiers of the attribute category options which the value represents within
the combination. The category combination is specified with the cc parameter, while the
category options are specified as a semi-colon separated string with the cp parameter. It is
necessary to ensure that the category options are all part of the category combination. An
example looks like this:
curl "[Link]
de=s46m5MS0hxu&ou=DiszpKrYNg8
&pe=201308&cc=dzjKKQq0cSO&cp=wbrDrL2aYEc;btOyqprQ9e8&value=26"
-X POST -u
admin:district
You can retrieve a data value with a request using the GET method. The value, comment and
followUp params are not applicable in this regard:
curl "[Link]
de=s46m5MS0hxu
&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s"
-u
admin:district
You can delete a data value with a request using the DELETE method.
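As a sketch, reusing the parameters from the GET example above with a placeholder server:
curl -X DELETE "https://<server>/api/dataValues?de=s46m5MS0hxu&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s" -u admin:district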
When dealing with data values which have a data element of type file there is some deviation
from the method described above. These data values are special in that the contents of the value
is a UID reference to a FileResource object instead of a self-contained constant. These data values
will behave just like other data values which store text content, but should be handled differently
in order to produce meaningful input and output.
The process of storing one of these data values roughly goes like this:
1. Upload the file to the /api/fileResources endpoint as described in the file resource section.
2. Retrieve the id property of the returned FileResource object.
3. Store the retrieved id as the value of the data value, using any of the methods described above.
Only one-to-one relationships between data values and file resources are allowed. This is
enforced internally so that saving a file resource id in several data values is not allowed and will
return an error. Deleting the data value will delete the referenced file resource. Direct deletion of file resources is not possible.
The data value can now be retrieved as any other but the returned data will be the UID of the file
resource. In order to retrieve the actual contents (meaning the file which is stored in the file
resource mapped to the data value) a GET request must be made to /api/dataValues/files
mirroring the query parameters as they would be for the data value itself. The /api/
dataValues/files endpoint only supports GET requests.
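As a sketch, mirroring the query parameters of the data value itself (placeholder server, identifiers taken from the examples above):
curl "https://<server>/api/dataValues/files?de=s46m5MS0hxu&pe=201301&ou=DiszpKrYNg8&co=Prlt0C1RF0s" -u admin:district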
It is worth noting that due to the underlying storage mechanism working asynchronously the file
content might not be immediately ready for download from the /api/dataValues/files
endpoint. This is especially true for large files which might require time consuming uploads
happening in the background to an external file store (depending on the system configuration).
Retrieving the file resource meta-data from the /api/fileResources/<id> endpoint allows
checking the storageStatus of the content before attempting to download it.
1.24 ADX data format
From version 2.20 we have included support for an international standard for aggregate data
exchange called ADX. ADX is developed and maintained by the Quality Research and Public
Health committee of the IHE (Integrating the HealthCare Enterprise). The wiki page detailing
QRPH activity can be found at [Link]. ADX is still under active development and has now
been published for trial implementation. Note that what is implemented currently in DHIS2 is the
functionality to read and write adx formatted data, i.e. what is described as Content Consumer
and Content Producer actors in the ADX profile.
The structure of an ADX data message is quite similar to what you might already be familiar with
from DXF 2 data described earlier. There are a few important differences. We will describe these
differences with reference to a small example:
<adx xmlns="urn:ihe:qrph:adx:2015" xmlns:xsi="[Link]XMLSchema-instance"
  xsi:schemaLocation="urn:ihe:qrph:adx:2015 ../schema/adx_loose.xsd" exported="2015-02-08T[Link]Z">
  <group orgUnit="OU_559" period="2015-06-01/P1M" completeDate="2015-07-01" dataSet="(TB/HIV)VCCT">
    <dataValue dataElement="VCCT_0" GENDER="FMLE" HIV_AGE="AGE0-14" value="32"/>
    <dataValue dataElement="VCCT_1" GENDER="FMLE" HIV_AGE="AGE0-14" value="20"/>
    <dataValue dataElement="VCCT_2" GENDER="FMLE" HIV_AGE="AGE0-14" value="10"/>
    <dataValue dataElement="PLHIV_TB_0" GENDER="FMLE" HIV_AGE="AGE0-14" value="10"/>
    <dataValue dataElement="PLHIV_TB_1" GENDER="FMLE" HIV_AGE="AGE0-14" value="10"/>
    <dataValue dataElement="VCCT_0" GENDER="MLE" HIV_AGE="AGE0-14" value="32"/>
    <dataValue dataElement="VCCT_1" GENDER="MLE" HIV_AGE="AGE0-14" value="20"/>
    <dataValue dataElement="VCCT_2" GENDER="MLE" HIV_AGE="AGE0-14" value="10"/>
    <dataValue dataElement="PLHIV_TB_0" GENDER="MLE" HIV_AGE="AGE0-14" value="10"/>
    <dataValue dataElement="PLHIV_TB_1" GENDER="MLE" HIV_AGE="AGE0-14" value="10"/>
    <dataValue dataElement="VCCT_0" GENDER="FMLE" HIV_AGE="AGE15-24" value="32"/>
    <dataValue dataElement="VCCT_1" GENDER="FMLE" HIV_AGE="AGE15-24" value="20"/>
    <dataValue dataElement="VCCT_2" GENDER="FMLE" HIV_AGE="AGE15-24" value="10"/>
    <dataValue dataElement="PLHIV_TB_0" GENDER="FMLE" HIV_AGE="AGE15-24" value="10"/>
    <dataValue dataElement="PLHIV_TB_1" GENDER="FMLE" HIV_AGE="AGE15-24" value="10"/>
    <dataValue dataElement="VCCT_0" GENDER="MLE" HIV_AGE="AGE15-24" value="32"/>
    <dataValue dataElement="VCCT_1" GENDER="MLE" HIV_AGE="AGE15-24" value="20"/>
    <dataValue dataElement="VCCT_2" GENDER="MLE" HIV_AGE="AGE15-24" value="10"/>
    <dataValue dataElement="PLHIV_TB_0" GENDER="MLE" HIV_AGE="AGE15-24" value="10"/>
    <dataValue dataElement="PLHIV_TB_1" GENDER="MLE" HIV_AGE="AGE15-24" value="10"/>
  </group>
</adx>
1.24.1 The adx root element
The adx root element has only one mandatory attribute, which is the exported timestamp. In
common with other adx elements, the schema is extensible in that it does not restrict additional
application specific attributes.
Unlike dxf2, adx requires that the datavalues are grouped according to orgUnit, period and
dataSet. The example above shows a data report for the “(TB/HIV) VCCT” dataset from the online
demo database. This example is using codes as identifiers instead of dhis2 uids. Codes are the
preferred form of identifier when using adx.
The orgUnit, period and dataSet attributes are mandatory in adx. The group element may contain
additional attributes. In our DHIS2 implementation any additional attributes are simply passed
through to the underlying importer. This means that all attributes which currently have meaning
in dxf2 (such as completeDate in the example above) can continue to be used in adx and they will
be processed in the same way.
A significant difference between adx and dxf2 is in the way that periods are encoded. Adx makes
strict use of ISO8601 and encodes the reporting period as (date|datetime)/(duration). So the
period in the example above is a period of 1 month (P1M) starting on 2015-06-01. So it is the data
for June 2015. The notation is a bit more verbose, but it is very flexible and allows us to support
all existing period types in DHIS2.
DHIS2 supports a limited number of periods or durations during import. Periods should begin
with the date in which the duration begins, followed by a “/” and then the duration notation as
noted in the table. The following table details all of the ADX supported period types, along with
examples.
ADX Periods
1.24.4 Data values
The dataValue element in adx is very similar to its equivalent in DXF. The mandatory attributes
are dataElement and value. The orgUnit and period attributes don’t appear in the dataValue as
they are required at the group level.
The most significant difference is the way that disaggregation is represented. DXF uses the
categoryOptionCombo to indicate the disaggregation of data. In adx the disaggregations
(e.g. AGEGROUP and SEX) are expressed explicitly as attributes. One important constraint on
using adx is that the categories used for dataElements in the dataSet MUST have a code assigned
to them, and further, that code must be of a form which is suitable for use as an XML attribute.
The exact constraint on an XML attribute name is described in the W3C XML standard - in practice, this means no spaces, no non-alphanumeric characters other than ‘_’, and it may not start with a number. The example above shows examples of ‘good’ category codes (‘GENDER’ and ‘HIV_AGE’).
This restriction on the form of codes applies only to categories. Currently, the convention is not
enforced by DHIS2 when you are assigning codes, but you will get an informative error message
if you try to import adx data and the category codes are either not assigned or not suitable.
The main benefits of using explicit dimensions of disaggregated data are that
• The system producing the data does not have to be synchronised with the
categoryOptionCombo within DHIS2.
• The producer and consumer can match their codes to a 3rd party authoritative source, such as a terminology service. Note that in the example above the Gender and AgeGroup
codes are using code lists from the WHO Global Health Observatory.
Note that this feature may be extremely useful, for example when producing disaggregated data
from an EMR system, but there may be cases where a categoryOptionCombo mapping is easier
or more desirable. The DHIS2 implementation of adx will check for the existence of a categoryOptionCombo attribute and, if it exists, will use that in preference to exploded
dimension attributes. Similarly, an attributeOptionCombo attribute on the group element will be
processed in the legacy way. Otherwise, the attributeOptionCombo can be treated as exploded
categories just as on the dataValue.
In the simple example above, each of the dataElements in the dataSet have the same
dimensionality (categorycombo) so the data is neatly rectangular. This need not be the case.
dataSets may contain dataElements with different categoryCombos, resulting in a ragged-right
adx data message.
DHIS2 exposes an endpoint for POSTing adx data at /api/dataValueSets using application/adx+xml as the content type. So, for example, the following curl command can be used to POST the example data above to the DHIS2 demo server:
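The original command is not reproduced here; as a sketch, assuming the adx payload is saved as adxdata.xml and using a placeholder server (the query parameters follow the DXF conventions described earlier):
curl --data-binary @adxdata.xml "https://<server>/api/dataValueSets?dataElementIdScheme=code&orgUnitIdScheme=code" -H "Content-Type:application/adx+xml" -X POST -u admin:district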
Note the query parameters are the same as are used with DXF data. The adx endpoint should
interpret all the existing DXF parameters with the same semantics as DXF.
1.24.6 Exporting data
DHIS2 exposes an endpoint to GET adx data sets at /api/dataValueSets using application/adx+xml as the accepted content type. So, for example, the following curl command can be used to retrieve the adx data:
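A sketch with a placeholder server; the dataSet and orgUnit identifiers are given as codes, as noted below, and the period is an assumption for illustration:
curl "https://<server>/api/dataValueSets?dataSet=(TB/HIV)VCCT&orgUnit=OU_559&period=201506" -H "Accept:application/adx+xml" -u admin:district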
Note the query parameters are the same as are used with DXF data. An important difference is
that the identifiers for dataSet and orgUnit are assumed to be codes rather than uids.
1.25 Program rules
This section is about sending and reading program rules, and explains the program rules data
model. The program rules give functionality to configure dynamic behaviour in the programs in
DHIS2.
1.25.1 Program rule model
The following table gives a detailed overview of the programRule model.
programRule
priority The priority to run the rule in cases where the order of the rules matters. In most cases the rules do not depend on being run before or after other rules, and in these cases the priority can be omitted. If no priority is set, the rule will be run after any rules that have a priority defined. If a priority (integer) is set, the rule with the lowest priority will be run before rules with higher priority. (optional)
The following table gives a detailed overview of the programRuleAction model.
programRuleAction
The following table gives a detailed overview of the programRuleVariable model.
programRuleVariable
#{myVariable} > 5
sourceType Defines how this variable is populated with data from the enrollment and events. (Compulsory)
• DATAELEMENT_NEWEST_EVENT_PROGRAM_STAGE - In tracker capture, gets the newest value that exists for a data element, within the events of a given program stage in the current enrollment. In event capture, gets the newest value among the 10 newest events on the organisation unit.
• DATAELEMENT_NEWEST_EVENT_PROGRAM - In tracker capture, gets the newest value that exists for a data element across the whole enrollment. In event capture, gets the newest value among the 10 newest events on the organisation unit.
• DATAELEMENT_PREVIOUS_EVENT - In tracker capture, gets the newest value that exists among events in the program that precede the current event. In event capture, gets the newest value among the 10 preceding events registered on the organisation unit.
1.25.2 Creating program rules
• TODO
1.26 Forms
To retrieve information about a form (which corresponds to a data set and its sections) you can
interact with the form resource. The form response is accessible as XML and JSON and will
provide information about each section (group) in the form as well as each field in the sections,
including labels and identifiers. By supplying period and organisation unit identifiers the form
response will be populated with data values.
To retrieve the form for a data set you can do a GET request like this:
/api/dataSets/<dataset-id>/[Link]
To retrieve the form for the data set with identifier “BfMAe6Itzgt” in XML:
/api/dataSets/BfMAe6Itzgt/form
To retrieve the form in JSON format including metadata for each object:
/api/dataSets/BfMAe6Itzgt/[Link]?metaData=true
To retrieve the form filled with data values for a specific period and organisation unit in XML:
/api/dataSets/BfMAe6Itzgt/[Link]?ou=DiszpKrYNg8&pe=201401
When it comes to custom data entry forms, this resource also allows for creating such forms
directly for a data set. This can be done through a POST or PUT request with content type text/
html where the payload is the custom form markup such as:
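A sketch with a placeholder server and a deliberately minimal markup fragment; a real payload would be the full custom form HTML, and the input identifier convention shown here (dataElementUid-categoryOptionComboUid-val) is an assumption for illustration:
curl -X PUT "https://<server>/api/dataSets/BfMAe6Itzgt/form" -H "Content-Type:text/html" -u admin:district -d "<table><tr><td><input id=\"f7n9E0hX8qk-bRowv6yZOF2-val\" /></td></tr></table>"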
1.27 Documents
Document fields
To create a document which points to an external URL you can make a POST request with a JSON payload to the documents resource:
/api/documents
"name":
"dhis
home",
"external":
true,
"url":
"https://
[Link]"
}
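As a sketch, assuming the JSON payload above is saved as document.json and using a placeholder server:
curl -d @document.json "https://<server>/api/documents" -H "Content-Type:application/json" -X POST -u admin:district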
A GET request with the id of a document appended will return information about the document.
A PUT request to the same endpoint will update the fields of the document:
/api/documents/<documentId>
Appending /data to the GET request will return the actual file content of the document:
/api/documents/<documentId>/data
1.28 Validation
To generate a data validation summary you can interact with the validation resource. The dataSet
resource is optimized for data entry clients for validating a data set / form, and can be accessed
like this:
/api/33/validation/dataSet/[Link]?pe=201501&ou=DiszpKrYNg8
In addition to validation based on data sets, there are two additional methods for performing validation: custom validation and scheduled validation.
Custom validation can be initiated through the “Data Quality” app, where you can configure the
periods, validation rule groups and organisation units to be included in the analysis and if you
want to send out notifications for and/or persist the results found. The result of this analysis will
be a list of violations found using your criteria.
The first path variable is an identifier referring to the data set to validate. XML and JSON resource
representations are supported. The response contains violations of validation rules. This will be
extended with more validation types in the coming versions.
To retrieve validation rules which are relevant for a specific data set, meaning validation rules
with formulas where all data elements are part of the specific data set, you can make a GET request to the validationRules resource like this:
/api/validationRules?dataSet=<dataset-id>
The validation rules have a left side and a right side, which is compared for validity according to
an operator. The valid operator values are found in the table below.
Operators
Value Description
equal_to Equal to
not_equal_to Not equal to
greater_than Greater than
greater_than_or_equal_to Greater than or equal to
less_than Less than
less_than_or_equal_to Less than or equal to
compulsory_pair If either side is present, the other must also be
exclusive_pair If either side is present, the other must not be
The left side and right side expressions are mathematical expressions which can contain
references to data elements and category option combinations on the following format:
${<dataelement-id>.<catoptcombo-id>}
The left side and right side expressions have a missing value strategy. This refers to how the
system should treat data values which are missing for data elements / category option
combination references in the formula in terms of whether the validation rule should be checked
for validity or skipped. The valid missing value strategies are found in the table below.
Value Description
SKIP_IF_ANY_VALUE_MISSING Skip validation rule if any data value is missing
SKIP_IF_ALL_VALUES_MISSING Skip validation rule if all data values are missing
NEVER_SKIP Never skip validation rule irrespective of missing data values

1.29 Validation Results
Validation results are persisted results of violations found during a validation analysis. If you
choose “persist results” when starting or scheduling a validation analysis, any violations found
will be stored in the database. When a result is stored in the database it will be used for 3 things:
1. Generating analytics based on the persisted results.
2. Persisted results that have not generated a notification will do so, once.
3. Skipping rules that have already been checked when running validation analysis.
This means that if you don't persist your results, you will be unable to generate analytics for validation results, results will generate notifications every time they are found, and running validation analysis might be slower.
/api/33/validationResults
You can also inspect an individual result using the validation result id in this endpoint:
/api/33/validationResults/<id>
Validation results are sent out to the appropriate users once every day, but can also be manually
triggered to run on demand using the following api endpoint:
/api/33/validation/sendNotifications
Several resources for performing data analysis and finding data quality and validation issues are provided.
1.30.1 Validation rule analysis
/api/dataAnalysis/validationRules
Sample output:
[
  {
    "validationRuleId": "kgh54Xb9LSE",
    "validationRuleDescription": "Malaria outbreak",
    "organisationUnitId": "DiszpKrYNg8",
    "organisationUnitDisplayName": "Ngelehun CHC",
    "organisationUnitPath": "/ImspTQPwCqd/O6uvpzGd5pu/YuQRtpLP10I/DiszpKrYNg8",
    "organisationUnitAncestorNames": "Sierra Leone / Bo / Badjia / ",
    "periodId": "201901",
    "periodDisplayName": "January 2019",
    "importance": "MEDIUM",
    "leftSideValue": 10.0,
    "operator": ">",
    "rightSideValue": 14.0
  },
  {
    "validationRuleId": "ZoG4yXZi3c3",
    "validationRuleDescription": "ANC 2 cannot be higher than ANC 1",
    "organisationUnitId": "DiszpKrYNg8",
    "organisationUnitDisplayName": "Ngelehun CHC",
    "organisationUnitPath": "/ImspTQPwCqd/O6uvpzGd5pu/YuQRtpLP10I/DiszpKrYNg8",
    "organisationUnitAncestorNames": "Sierra Leone / Bo / Badjia / ",
    "periodId": "201901",
    "periodDisplayName": "January 2019",
    "importance": "MEDIUM",
    "leftSideValue": 22.0,
    "operator": "<=",
    "rightSideValue": 19.0
  }
]
1.30.2 Standard deviation based outlier analysis
/api/dataAnalysis/stdDevOutlier
/api/dataAnalysis/minMaxOutlier
The supported query parameters are equal to the std dev based outlier analysis resource
described above.
/api/dataAnalysis/followup
The supported query parameters are equal to the std dev based outlier analysis resource
described above.
1.31 Data integrity
The data integrity capabilities of the data administration module are available through the web
API. This section describes how to run the data integrity process as well as retrieving the result.
The details of the analysis performed are described in the user manual.
The operation of measuring data integrity is a fairly resource (and time) demanding task. It is
therefore run as an asynchronous process and only when explicitly requested. Starting the task is
done by forming an empty POST request to the dataIntegrity endpoint like so (demonstrated in
curl syntax):
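The original command is not reproduced here; as a sketch with a placeholder server:
curl -X POST "https://<server>/api/dataIntegrity" -u admin:district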
If successful the request will return HTTP 202 immediately. The location header of the response
points to the resource used to check the status of the request. The payload also contains a json
object of the job created. Forming a GET request to the given location yields an empty JSON
response if the task has not yet completed and a JSON taskSummary object when the task is
done. Polling (conservatively) to this resource can hence be used to wait for the task to finish.
Once data integrity is finished running the result can be fetched from the system/
taskSummaries resource like so:
curl "[Link]
DATAINTEGRITY"
The returned object contains a summary for each point of analysis, listing the names of the
relevant integrity violations. As stated in the leading paragraph for this section the details of the
analysis (and the resulting data) can be found in the user manual chapter on Data
Administration.
1.32 Indicators
To retrieve indicators you can make a GET request to the indicators resource like this:
/api/indicators
Indicators represent expressions which can be calculated and presented as a result. The indicator
expressions are split into a numerator and denominator. The numerators and denominators are
mathematical expressions which can contain references to data elements, other indicators,
constants and organisation unit groups. The variables will be substituted with data values when
used e.g. in reports. Variables which are allowed in expressions are described in the following
table.
Indicator variables
Note that for data element variables the category option combo identifier can be omitted. The
variable will then represent the total for the data element, e.g. across all category option combos.
Example:
#{P3jJH5Tu5VC} + 2
Data element operands can include any of category option combination and attribute option
combination, and use wildcards to indicate any value:
R{BfMAe6Itzgt.REPORTING_RATE} * #{P3jJH5Tu5VC.S34ULMcHMca}
R{BfMAe6Itzgt.ACTUAL_REPORTS} / R{BfMAe6Itzgt.EXPECTED_REPORTS}
N{Rigf2d2Zbjp} * #{P3jJH5Tu5VC.S34ULMcHMca}
1.32.2 Program indicators
To retrieve program indicators you can make a GET request to the program indicators resource
like this:
/api/programIndicators
Variable Description
#{<programstage- Refers to a combination of program stage and data element
id>.<dataelement-id>} id.
A{<attribute-id>} Refers to a tracked entity attribute.
V{<variable-id>} Refers to a program variable.
C{<constant-id>} Refers to a constant.
1.32.3 Expressions
Expressions are mathematical formulas which can contain references to data elements,
constants and organisation unit groups. To validate and get the textual description of an
expression, you can make a GET request to the expressions resource:
/api/expressions/description?expression=<expression-string>
The response follows the standard JSON web message format. The status property indicates the
outcome of the validation and will be “OK” if successful and “ERROR” if failed. The message
property will be “Valid” if successful and provide a textual description of the reason why the
validation failed if not. The description provides a textual description of the expression.
"httpStatus":
"OK",
"httpStatusCode":
200,
"status":
"OK",
"message":
"Valid",
"description": "Acute
Flaccid
Paralysis"
}
1.33 Complete data set registrations
This section is about complete data set registrations for data sets. A registration marks a data set as completely captured.
1.33.1 Completing data sets
This section explains how to register data sets as complete. This is achieved by interacting with
the completeDataSetRegistrations resource:
/api/33/completeDataSetRegistrations
The endpoint supports the POST method for registering data set completions. The endpoint is
functionally very similar to the dataValueSets endpoint, with support for bulk import of complete
registrations.
Importing both XML and JSON formatted payloads are supported. The basic format of this
payload, given as XML in this example, is like so:
<completeDataSetRegistrations xmlns="http://[Link]/schema/dxf/2.0">
  <completeDataSetRegistration period="200810" dataSet="eZDhcZi6FLP" organisationUnit="qhqAxPSTUXp" attributeOptionCombo="bRowv6yZOF2" storedBy="imported"/>
  <completeDataSetRegistration period="200811" dataSet="eZDhcZi6FLP" organisationUnit="qhqAxPSTUXp" attributeOptionCombo="bRowv6yZOF2" storedBy="imported"/>
</completeDataSetRegistrations>
The storedBy attribute is optional (as it is a nullable property on the complete registration object). You can also optionally set the date property (time of registration) as an attribute. If the time is not set, the current time will be used.
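As a sketch of the corresponding POST, assuming the XML payload above is saved as registrations.xml and using a placeholder server:
curl -d @registrations.xml "https://<server>/api/33/completeDataSetRegistrations" -H "Content-Type:application/xml" -X POST -u admin:district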
1.33.2 Reading complete data set registrations
This section explains how to retrieve data set completeness registrations. We will be using the
completeDataSetRegistrations resource. The query parameters to use are these:
Parameter Description
dataSet Data set identifier, multiple data sets are allowed
period Period identifier in ISO format. Multiple periods are
allowed.
startDate Start date for the time span of the values to export
endDate End date for the time span of the values to export
created Include only registrations which were created since the
given timestamp
createdDuration Include only registrations which were created within the
given duration. The format is <value><time-unit>,
where the supported time units are “d”, “h”, “m”, “s”
(days, hours, minutes, seconds). The time unit is relative
to the current time.
orgUnit Organisation unit identifier, can be specified multiple
times. Not applicable if orgUnitGroup is given.
orgUnitGroup Organisation unit group identifier, can be specified
multiple times. Not applicable if orgUnit is given.
children Whether to include the children in the hierarchy of the
organisation units
limit The maximum number of registrations to include in the
response.
idScheme Identifier property used for meta data objects in the
response.
dataSetIdScheme Identifier property used for data sets in the response.
Overrides idScheme.
orgUnitIdScheme Identifier property used for organisation units in the
response. Overrides idScheme.
attributeOptionComboIdScheme Identifier property used for attribute option combos in
the response. Overrides idScheme.
The dataSet and orgUnit parameters can be repeated in order to include multiple data sets and
organisation units.
The period, start/end date, created and createdDuration parameters provide multiple ways to set
the time dimension for the request, thus only one can be used. For example, it doesn’t make
sense to both set the start/end date and to set the periods.
curl "[Link]
dataSet=pBOMPrpg1QX
&dataSet=pBOMPrpg1QX&startDate=2014-01-01&endDate=2014-01-31&orgUnit=YuQRtpLP10I
&orgUnit=vWbkYPRmKyS&children=true"
-H "Accept:application/xml" -u
admin:district
You can get the response in xml and json format. You can indicate which response format you
prefer through the Accept HTTP header like in the example above. For xml you use application/
xml; for json you use application/json.
1.33.3 Un-completing data sets
This section explains how you can un-register the completeness of a data set. To un-complete a data set you will interact with the completeDataSetRegistrations resource:
/api/33/completeDataSetRegistrations
This resource supports DELETE for un-registration. The following query parameters are
supported:
Query parameter Required Description
ds Yes Data set identifier
pe Yes Period identifier
ou Yes Organisation unit identifier
cc No (must be combined with cp) Attribute combo identifier (for locking check)
cp No (must be combined with cc) Attribute option identifiers, separated with ; for multiple values (for locking check)
multiOu No (default false) Whether registration applies to sub units
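As a sketch, using the identifiers from the registration example above and a placeholder server:
curl -X DELETE "https://<server>/api/33/completeDataSetRegistrations?ds=eZDhcZi6FLP&pe=200810&ou=qhqAxPSTUXp&multiOu=false" -u admin:district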
1.34 Data approval
This section explains how to approve, unapprove and check approval status using the
dataApprovals resource. Approval is done per data approval workflow, period, organisation unit
and attribute option combo.
/api/33/dataApprovals
1.34.1 Get approval status
To get approval information for a data set you can issue a GET request:
/api/dataApprovals?wf=rIUL3hYOjJc&pe=201801&ou=YuQRtpLP10I
Query
Required Description
parameter
wf Yes Data approval workflow identifier
pe Yes Period identifier
ou Yes Organisation unit identifier
aoc No Attribute option combination identifier
Note

For backward compatibility, the parameter ds for data set may be given instead of wf for workflow in this and other data approval requests as described below. If the data set is given, the workflow associated with that data set will be used.

The response will look similar to this:

{
  "mayApprove": false,
  "mayUnapprove": false,
  "mayAccept": false,
  "mayUnaccept": false,
  "state": "UNAPPROVED_ELSEWHERE"
}
State                    Description
UNAPPROVABLE             Data approval does not apply to this selection. (Data is neither approved nor unapproved.)
UNAPPROVED_WAITING       Data could be approved for this selection, but is waiting for some lower-level approval before it is ready to be approved.
UNAPPROVED_ELSEWHERE     Data is unapproved, and is waiting for approval somewhere else (not approvable here.)
UNAPPROVED_READY         Data is unapproved, and is ready to be approved for this selection.
APPROVED_HERE            Data is approved, and was approved here (so could be unapproved here.)
APPROVED_ELSEWHERE       Data is approved, but was not approved here (so cannot be unapproved here.)
Note that when querying for the status of data approval, you may specify any combination of the
query parameters. The combination you specify does not need to describe the place where data
is to be approved at one of the approval levels. For example:
• The organisation unit might not be at an approval level. The approval status is determined
by whether data is approved at an approval level for an ancestor of the organisation unit.
• You may specify individual attribute category options. The approval status is determined
by whether data is approved for an attribute category option combination that includes
one or more of these options.
• You may specify a time period that is longer than the period for the data set at which the
data is entered and approved. The approval status is determined by whether the data is
approved for all the data set periods within the period you specify.
For data sets which are associated with a category combo you might want to fetch data approval
records for individual attribute option combos from the following resource with a GET request:
/api/dataApprovals/categoryOptionCombos?wf=rIUL3hYOjJc&pe=201801&ou=YuQRtpLP10I
1.34.2 Bulk get approval status

To get a list of multiple approval statuses, you can issue a GET request similar to this:
/api/dataApprovals/approvals?wf=rIUL3hYOjJc&pe=201801,201802&ou=YuQRtpLP10I
The parameters wf, pe, ou, and aoc are the same as for getting a single approval status, except
that you can provide a comma-separated list of one or more values for each parameter.
This will give you a response containing a list of approval parameters and statuses, something
like this:
[
  {
    "aoc": "HllvX50cXC0",
    "pe": "201801",
    "level": "KaTJLhGmU95",
    "ou": "YuQRtpLP10I",
    "permissions": {
      "mayApprove": false,
      "mayUnapprove": true,
      "mayAccept": true,
      "mayUnaccept": false,
      "mayReadData": true
    },
    "state": "APPROVED_HERE",
    "wf": "rIUL3hYOjJc"
  },
  {
    "aoc": "HllvX50cXC0",
    "pe": "201802",
    "ou": "YuQRtpLP10I",
    "permissions": {
      "mayApprove": true,
      "mayUnapprove": false,
      "mayAccept": false,
      "mayUnaccept": false,
      "mayReadData": true
    },
    "state": "UNAPPROVED_READY",
    "wf": "rIUL3hYOjJc"
  }
]
Field         Description
aoc           Attribute option combination identifier
pe            Period identifier
ou            Organisation unit identifier
permissions   The permissions: ‘mayApprove’, ‘mayUnapprove’, ‘mayAccept’, ‘mayUnaccept’ and ‘mayReadData’ (same definitions as for getting a single approval status).
state         One of the data approval states (same as for getting a single approval status).
wf            Data approval workflow identifier
1.34.3 Approve data

To approve data you can issue a POST request to the dataApprovals resource. To un-approve data, you can issue a DELETE request to the dataApprovals resource.

To accept data that is already approved you can issue a POST request to the dataAcceptances resource. To un-accept data, you can issue a DELETE request to the dataAcceptances resource.
Action parameter   Required   Description
wf                 Yes        Data approval workflow identifier
pe                 Yes        Period identifier
ou                 Yes        Organisation unit identifier
aoc                No         Attribute option combination identifier
Note that, unlike querying the data approval status, you must specify parameters that
correspond to a selection of data that could be approved. In particular, both of the following
must be true:
• The organisation unit’s level must be specified by an approval level in the workflow.
• The time period specified must match the period type of the workflow.
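As a sketch of the single-selection case (identifiers taken from the examples earlier in this section; {server} is a placeholder for the base URL of the DHIS2 instance), approval and un-approval could be issued like this:

# Approve data for one workflow, period and organisation unit
curl "{server}/api/33/dataApprovals?wf=rIUL3hYOjJc&pe=201801&ou=YuQRtpLP10I" \
  -X POST -u admin:district

# Un-approve the same selection
curl "{server}/api/33/dataApprovals?wf=rIUL3hYOjJc&pe=201801&ou=YuQRtpLP10I" \
  -X DELETE -u admin:district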
POST /api/33/dataApprovals/approvals
POST /api/33/dataApprovals/unapprovals
POST /api/33/dataAcceptances/acceptances
POST /api/33/dataAcceptances/unacceptances
{
  "wf": ["pBOMPrpg1QX", "lyLU2wR22tC"],
  "pe": ["201601", "201602"],
  "approvals": [
    {
      "ou": "cDw53Ej8rju",
      "aoc": "ranftQIH5M9"
    },
    {
      "ou": "cDw53Ej8rju",
      "aoc": "fC3z1lcAW5x"
    }
  ]
}
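As a usage sketch, the bulk payload above could be submitted as follows; the file name approvals.json is only illustrative and {server} is a placeholder for the base URL:

# Submit the bulk approval payload shown above
curl "{server}/api/33/dataApprovals/approvals" \
  -d @approvals.json \
  -H "Content-Type: application/json" \
  -X POST -u admin:district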
1.34.5 Get data approval levels

To retrieve data approval workflows and their data approval levels you can make a GET request similar to this:
/api/dataApprovalWorkflows?fields=id,name,periodType,dataApprovalLevels[id,name,level,orgUnitLevel]
1.35 Auditing
DHIS2 does automatic auditing on all updates and deletions of aggregate data values, tracked
entity data values, tracked entity attribute values, and data approvals. This section explains how
to fetch this data.
The endpoint for aggregate data value audits is located at /api/audits/dataValue, and the
available parameters are displayed in the table below.
/api/33/audits/dataValue?ds=lyLU2wR22tC
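As a sketch of a narrower audit query, where the parameter names de, pe, ou and auditType are assumptions modelled on the other audit endpoints in this section, the identifiers are illustrative and {server} is a placeholder for the base URL:

# Fetch aggregate data value audits, narrowed down by data element,
# period, organisation unit and audit type (parameter names assumed)
curl "{server}/api/33/audits/dataValue?ds=lyLU2wR22tC&de=eMyVanycQSC&pe=201801&ou=DiszpKrYNg8&auditType=UPDATE" \
  -u admin:district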
The endpoint for tracked entity data value audits is located at /api/audits/
trackedEntityDataValue, and the available parameters are displayed in the table below.
/api/33/audits/trackedEntityDataValue?de=eMyVanycQSC&de=qrur9Dvnyt5
The endpoint for tracked entity attribute value audits is located at /api/audits/
trackedEntityAttributeValue, and the available parameters are displayed in the table
below.
/api/33/audits/trackedEntityAttributeValue?tea=VqEFza8wbwA
Once auditing is enabled for tracked entity instances (by setting allowAuditLog of tracked entity
types to true), all read and search operations are logged. The endpoint for accessing audit logs is
api/audits/trackedEntityInstance. Below are available parameters to interact with this endpoint.
Get all tracked entity instance audits of type READ with startDate=2018-03-01 and
endDate=2018-04-24 in a page size of 5:
/api/33/audits/[Link]?startDate=2018-03-01&endDate=2018-04-24&auditType=READ&pageSize=5
Once auditing is enabled for enrollments (by setting allowAuditLog of tracker programs to true),
all read operations are logged. The endpoint for accessing audit logs is api/audits/enrollment.
Below are available parameters to interact with this endpoint.
Get all enrollment audits with startDate=2018-03-01 and endDate=2018-04-24 in a page size of 5:
/api/audits/[Link]?startDate=2018-03-01&endDate=2018-04-24&pageSize=5
/api/audits/[Link]?user=admin
The endpoint for data approval audits is located at /api/audits/dataApproval, and the available
parameters are displayed in the table below.
/api/33/audits/dataApproval?wf=RwNpkAM7Hw7
1.36 Message conversations

DHIS2 features a mechanism for sending messages for purposes such as user feedback, notifications, and general information to users. Messages are grouped into conversations. To interact with message conversations you can send POST and GET requests to the messageConversations resource.
/api/33/messageConversations
Messages are delivered to the DHIS2 message inbox but can also be sent to the user’s email
addresses and mobile phones as SMS. In this example, we will see how we can utilize the Web
API to send, read and manage messages. We will pretend to be the DHIS2 Administrator user and
send a message to the Mobile user. We will then pretend to be the mobile user and read our new
message. Following this, we will manage the admin user inbox by marking and removing
messages.
1.36.1 Writing and reading messages

The resource we need to interact with when sending and reading messages is the messageConversations resource. We start by visiting the Web API entry point at http://[Link]/demo/api where we find and follow the link to the messageConversations resource at [Link] The description tells us that we can use a POST request to create a new message using the following XML format for sending to multiple users:
<message xmlns="http://[Link]/schema/dxf/2.0">
  <subject>This is the subject</subject>
  <text>This is the text</text>
  <users>
    <user id="user1ID" />
    <user id="user2ID" />
    <user id="user3ID" />
  </users>
</message>
For sending to all users contained in one or more user groups, we can use:
<message xmlns="http://[Link]/schema/dxf/2.0">
  <subject>This is the subject</subject>
  <text>This is the text</text>
  <userGroups>
    <userGroup id="userGroup1ID" />
    <userGroup id="userGroup2ID" />
    <userGroup id="userGroup3ID" />
  </userGroups>
</message>
For sending to all users connected to one or more organisation units, we can use:
<message xmlns="http://[Link]/schema/dxf/2.0">
  <subject>This is the subject</subject>
  <text>This is the text</text>
  <organisationUnits>
    <organisationUnit id="ou1ID" />
    <organisationUnit id="ou2ID" />
    <organisationUnit id="ou3ID" />
  </organisationUnits>
</message>
Since we want to send a message to our friend the mobile user we need to look up her identifier. We do so by going to the Web API entry point and following the link to the users resource at /api/users. We continue by following the link to the mobile user at /api/users/PhzytPW3g2J where we learn that her identifier is PhzytPW3g2J. We are now ready to put our XML message together to form a message where we want to ask the mobile user whether she has reported data for January 2014:
<message xmlns="http://[Link]/schema/dxf/2.0">
  <subject>Mortality data reporting</subject>
  <text>Have you reported data for the Mortality data set for January 2014?</text>
  <users>
    <user id="PhzytPW3g2J" />
  </users>
</message>
To test this we save the XML content into a file called [Link]. We use cURL to dispatch the message to the DHIS2 demo instance, where we indicate that the content-type is XML and authenticate as the admin user:
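A sketch of such a command, assuming the XML above was saved as message.xml (the file name is illustrative) and with {server} as a placeholder for the base URL:

# Dispatch the XML message to the messageConversations resource
curl "{server}/api/33/messageConversations" \
  -d @message.xml \
  -H "Content-Type: application/xml" \
  -X POST -u admin:district

A corresponding payload in JSON format could look like this: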
"subject":
"Hey",
"text":
"How
are
you?",
"users":
[
{
"id":
"OYLGMiazHtW"
},
{
"id":
"N3PZBUlN8vq"
}
113
1 Web API 1.36.1 Writing and reading messages
],
"userGroups":
[
{
"id":
"ZoHNWQajIoe"
}
],
"organisationUnits":
[
{
"id":
"DiszpKrYNg8"
}
]
}
If all is well we receive a 201 Created HTTP status code. Also, note that we receive a Location HTTP header whose value informs us of the URL of the newly created message conversation resource - this can be used by a consumer to perform further actions.
We will now pretend to be the mobile user and read the message which was just sent by
dispatching a GET request to the messageConversations resource. We supply an Accept header
with application/xml as the value to indicate that we are interested in the XML resource
representation and we authenticate as the mobile user:
curl "[Link]messageConversations" -H "Accept:application/xml" -u mobile:district
<messageConversations xmlns="http://[Link]/schema/dxf/2.0" link="[Link]messageConversations">
  <messageConversation name="Mortality data reporting" id="ZjHHSjyyeJ2" link="[Link]messageConversations/ZjHHSjyyeJ2"/>
  <messageConversation name="DHIS2 version 2.7 is deployed" id="GDBqVfkmnp2" link="[Link]messageConversations/GDBqVfkmnp2"/>
</messageConversations>
From the response, we are able to read the identifier of the newly sent message which is
ZjHHSjyyeJ2. Note that the link to the specific resource is embedded and can be followed in order
to read the full message. We can reply directly to an existing message conversation once we
know the URL by including the message text as the request payload. We are now able to
construct a URL for sending our reply:
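A sketch of such a reply request, using the conversation identifier ZjHHSjyyeJ2 read above and an illustrative reply text; {server} is a placeholder for the base URL:

# Reply to the conversation by posting the reply text as plain text
curl "{server}/api/33/messageConversations/ZjHHSjyyeJ2" \
  -d "Yes, the Mortality data set has been reported" \
  -H "Content-Type: text/plain" \
  -X POST -u mobile:district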
If all went according to plan you will receive a 200 OK status code.
Message conversations can also be filtered using a query string and query operator: queryString=?&queryOperator=? The filter searches for matches in subject, text and senders of message conversations. The default query operator is token, however other operators can be defined in the query.
1.36.2 Managing messages

As users receive and send messages, conversations will start to pile up in their inboxes, eventually becoming laborious to track. We will now have a look at managing a user's message inbox by removing and marking conversations through the Web API. We will do so by performing some maintenance in the inbox of the "DHIS Administrator" user.
First, let’s have a look at removing a few messages from the inbox. Be sure to note that all
removal operations described here only remove the relation between a user and a message
conversation. In practical terms this means that we are not deleting the messages themselves (or
any content for that matter) but are simply removing the message thread from the user such that
it is no longer listed in the /api/messageConversations resource.
To remove a message conversation from a user's inbox we need to issue a DELETE request to the resource identified by the id of the message conversation and the participating user. For example, to remove the user with id xE7jOejl9FI from the conversation with id jMe43trzrdi:
curl "[Link]messageConversations/jMe43trzrdi/xE7jOejl9FI" -X DELETE -u admin:district
If the request was successful the server will reply with a 200 OK. The response body contains an XML or JSON object (according to the accept header of the request) containing the id of the removed user.

{
  "removed": ["xE7jOejl9FI"]
}
On failure the returned object will contain a message payload which describes the error.
{
  "message": "No user with uid: dMV6G0tPAEa"
}
The observant reader will already have noticed that the object returned on success in our
example is actually a list of ids (containing a single entry). This is due to the endpoint also
supporting batch removals. The request is made to the same messageConversations resource
but follows slightly different semantics. For batch operations, the conversation ids are given as
query string parameters. The following example removes two separate message conversations
for the current user:
curl "[Link]mc=WzMRrCosqc0&mc=lxCjiigqrJm" -X DELETE -u admin:district
If you have sufficient permissions, conversations can be removed on behalf of another user by
giving an optional user id parameter.
curl "[Link]mc=WzMRrCosqc0&mc=lxCjiigqrJm&user=PhzytPW3g2J" -X DELETE -u admin:district
As indicated, batch removals will return the same message format as for single operations. The
list of removed objects will reflect successful removals performed. Partially erroneous requests
(i.e. non-existing id) will therefore not cancel the entire batch operation.
Messages carry a boolean read property. This allows tracking whether a user has seen (opened) a
message or not. In a typical application scenario (e.g. the DHIS2 web portal) a message will be
marked read as soon as the user opens it for the first time. However, users might want to
manage the read or unread status of their messages in order to keep track of certain
conversations.
Marking messages read or unread follows similar semantics as batch removals, and also
supports batch operations. To mark messages as read we issue a POST to the
messageConversations/read resource with a request body containing one or more message
ids. To mark messages as unread we issue an identical request to the messageConversations/
unread resource. As is the case for removals, an optional user request parameter can be given.
curl "[Link]messageConversations/read" -d '["ZrKML5WiyFm","Gc03smoTm6q"]' -X POST -H "Content-Type: application/json" -u admin:district
{
  "markedRead": ["ZrKML5WiyFm", "Gc03smoTm6q"]
}
You can add recipients to an existing message conversation. The resource is located at:
/api/33/messageConversations/id/recipients
The options for this resource are a list of users, user groups and organisation units. The request should look like this:
{
  "users": [
    {
      "id": "OYLGMiazHtW"
    },
    {
      "id": "N3PZBUlN8vq"
    }
  ],
  "userGroups": [
    {
      "id": "DiszpKrYNg8"
    }
  ],
  "organisationUnits": [
    {
      "id": "DiszpKrYNg8"
    }
  ]
}
1.36.3 Message Attachments

Creating messages with attachments is done in two steps: uploading the file to the attachments resource, and then including one or several of the attachment IDs when creating a new message. A POST request to the attachments resource will upload the file to the server.
The request returns an object that represents the attachment. The id of this object must be used
when creating a message in order to link the attachment with the message.
{
  "created": "2018-07-20T[Link].210",
  "lastUpdated": "2018-07-20T[Link].212",
  "externalAccess": false,
  "publicAccess": "--------",
  "user": {
    "name": "John Traore",
    "created": "2013-04-18T[Link].407",
    "lastUpdated": "2018-03-09T[Link].512",
    "externalAccess": false,
    "displayName": "John Traore",
    "favorite": false,
    "id": "xE7jOejl9FI"
  },
  "lastUpdatedBy": {
    "id": "xE7jOejl9FI",
    "name": "John Traore"
  },
  "favorite": false,
  "id": "fTpI4GOmujz"
}
When creating a new message, the ids can be passed in the request body to link the uploaded files to the message being created.

{
  "subject": "Hey",
  "text": "How are you?",
  "users": [
    {
      "id": "OYLGMiazHtW"
    },
    {
      "id": "N3PZBUlN8vq"
    }
  ],
  "userGroups": [
    {
      "id": "ZoHNWQajIoe"
    }
  ],
  "organisationUnits": [
    {
      "id": "DiszpKrYNg8"
    }
  ],
  "attachments": ["fTpI4GOmujz", "h2ZsOxMFMfq"]
}
Once a message with an attachment has been created, the attached file can be accessed with a
GET request to the following URL:
/api/messageConversations/<mcv-id>/<msg-id>/attachments/<attachment-id>
Where <mcv-id> is the message conversation ID, <msg-id> is the ID of the message that contains the attachment and <attachment-id> is the ID of the specific message attachment.
1.36.4 Tickets and Validation Result Notifications

You can use the “write feedback” tool to create tickets and messages. The only difference between a ticket and a message is that you can give a status and a priority to a ticket. To set the status:

POST /api/messageConversations/<uid>/status

To set the priority:

POST /api/messageConversations/<uid>/priority

In 2.29, messages generated by validation analysis also support the status and priority properties. By default, messages generated by validation analysis will inherit the priority of the validation rule in question, or the highest importance if the message contains multiple rules.
In 2.30, validation rules can be assigned to any user while tickets still need to be assigned to a
user in the system’s feedback recipient group.
Status Priority
OPEN LOW
PENDING MEDIUM
INVALID HIGH
SOLVED
You can also add an internal message to a ticket, which can only be seen by users who have “Manage tickets” permissions. To create an internal reply, include the “internal” parameter and set it to true:

curl -d "This is an internal message" "[Link]internal=true" -H "Content-Type:text/plain" -u admin:district -X POST
1.37 Interpretations
For resources related to data analysis in DHIS2, such as pivot tables, charts, maps, event reports
and event charts, you can write and share data interpretations. An interpretation can be a
comment, question, observation or interpretation about a data report or visualization.
/api/interpretations
1.37.1 Reading interpretations

To read interpretations we will interact with the /api/interpretations resource. A typical GET request using field filtering can look like this:
GET /api/interpretations?fields=*,comments[id,text,user,mentions]
The output in JSON response format could look like below (additional fields omitted for brevity):

{
  "interpretations": [
    {
      "id": "XSHiFlHAhhh",
      "created": "2013-05-30T[Link].181+0000",
      "text": "Data looks suspicious, could be a data entry mistake.",
      "type": "REPORT_TABLE",
      "likes": 2,
      "user": {
        "id": "uk7diLujYif"
      },
      "reportTable": {
        "id": "LcSxnfeBxyi"
      },
      "visualization": {
        "id": "LcSxnfeBxyi"
      }
    },
    {
      "id": "kr4AnZmYL43",
      "created": "2013-05-29T[Link].081+0000",
      "text": "Delivery rates in Bo looks high.",
      "type": "CHART",
      "likes": 3,
      "user": {
        "id": "uk7diLujYif"
      },
      "chart": {
        "id": "HDEDqV3yv3H"
      },
      "visualization": {
        "id": "HDEDqV3yv3H"
      },
      "mentions": [
        {
          "created": "2018-06-25T[Link].498",
          "username": "boateng"
        }
      ],
      "comments": [
        {
          "id": "iB4Etq8yTE6",
          "text": "This report indicates a surge.",
          "user": {
            "id": "B4XIfwOcGyI"
          }
        },
        {
          "id": "iB4Etq8yTE6",
          "text": "Likely caused by heavy rainfall.",
          "user": {
            "id": "B4XIfwOcGyI"
          }
        },
        {
          "id": "SIjkdENan8p",
          "text": "Have a look at this @boateng.",
          "user": {
            "id": "xE7jOejl9FI"
          },
          "mentions": [
            {
              "created": "2018-06-25T[Link].316",
              "username": "boateng"
            }
          ]
        }
      ]
    }
  ]
}
Interpretation fields

Field          Description
id             The interpretation identifier.
created        The time when the interpretation was created.
type           The type of analytical object being interpreted. Valid options: REPORT_TABLE, CHART, MAP, EVENT_REPORT, EVENT_CHART, DATASET_REPORT.
user           Association to the user who created the interpretation.
reportTable    Association to the report table if type is REPORT_TABLE.
chart          Association to the chart if type is CHART.
visualization  Association to the visualization if type is CHART or REPORT_TABLE (both types are in deprecation process in favour of VISUALIZATION).
map            Association to the map if type is MAP.
eventReport    Association to the event report if type is EVENT_REPORT.
eventChart     Association to the event chart if type is EVENT_CHART.
dataSet        Association to the data set if type is DATASET_REPORT.
comments       Array of comments for the interpretation. The text field holds the actual comment.
mentions       Array of mentions for the interpretation. A list of user identifiers.
For all analytical objects you can append /data to the URL to retrieve the data associated with the
resource (as opposed to the metadata). As an example, by following the map link and appending
/data one can retrieve a PNG (image) representation of the thematic map through the following
URL:
[Link]
For all analytical objects you can filter by mentions. To retrieve all the interpretations/comments
where a user has been mentioned you have three options. You can filter by the interpretation
mentions (mentions in the interpretation description):
GET /api/interpretations?fields=*,comments[*]&filter=[Link]:in:[boateng]
You can filter by the interpretation comments mentions (mentions in any comment):
GET /api/interpretations?fields=*,comments[*]
&filter=[Link]:in:[boateng]
You can filter by interpretations which contain the mentions either in the interpretation or in any comment (OR junction):
GET /api/interpretations?fields=*,comments[*]&filter=mentions:in:[boateng]
1.37.2 Writing interpretations

When writing interpretations you will supply the interpretation text as the request body using a POST request with content type “text/plain”. The URL pattern looks like the below, where {object-type} refers to the type of the object being interpreted and {object-id} refers to the identifier of the object being interpreted.
/api/interpretations/{object-type}/{object-id}
Valid options for object type are reportTable, chart, map, eventReport, eventChart and
dataSetReport.
Note
The charts and reportTables APIs are deprecated. We recommend
using the visualizations API instead.
/api/interpretations/reportTable/yC86zJxU1i1
/api/interpretations/chart/ZMuYVhtIceD
/api/interpretations/visualization/hQxZGXqnLS9
/api/interpretations/map/FwLHSMCejFu
/api/interpretations/eventReport/xJmPLGP3Cde
/api/interpretations/eventChart/nEzXB2M9YBz
/api/interpretations/dataSetReport/tL7eCjmDIgM
As an example, we will start by writing an interpretation for the chart with identifier
EbRN2VIbPdV. To write chart interpretations we will interact with the /api/interpretations/
chart/{chartId} resource. The interpretation will be the request body. Based on this we can
put together the following request using cURL:
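A sketch of such a request, using the chart identifier EbRN2VIbPdV mentioned above and an illustrative interpretation text; {server} is a placeholder for the base URL:

# Write a plain-text interpretation for the chart
curl "{server}/api/33/interpretations/chart/EbRN2VIbPdV" \
  -d "ANC coverage is dropping sharply towards the end of the year" \
  -H "Content-Type: text/plain" \
  -X POST -u admin:district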
Notice that the response provides a Location header with a value indicating the location of the
created interpretation. This is useful from a client perspective when you would like to add a
comment to the interpretation.
1.37.3 Updating and removing interpretations

To update an existing interpretation you can use a PUT request where the interpretation text is the request body, using the following URL pattern, where {id} refers to the interpretation identifier:
/api/interpretations/{id}
You can use the same URL pattern as above using a DELETE request to remove the
interpretation.
When writing comments to interpretations you will supply the comment text as the request body
using a POST request with content type “text/plain”. The URL pattern looks like the below, where
{interpretation-id} refers to the interpretation identifier.
/api/interpretations/{interpretation-id}/comments
Second, we will write a comment to the interpretation we wrote in the example above. By looking
at the interpretation response you will see that a Location header is returned. This header tells us
the URL of the newly created interpretation and from that, we can read its identifier. This
identifier is randomly generated so you will have to replace the one in the command below with
your own. To write a comment we can interact with the /api/interpretations/{id}/
comments resource like this:
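A sketch of such a request, where {interpretation-id} must be replaced with the identifier read from the Location header, the comment text is illustrative and {server} is a placeholder for the base URL:

# Add a plain-text comment to the interpretation
curl "{server}/api/33/interpretations/{interpretation-id}/comments" \
  -d "An intervention seems to be needed here" \
  -H "Content-Type: text/plain" \
  -X POST -u admin:district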
1.37.5 Updating and removing interpretation comments
To update an interpretation comment you can use a PUT request where the comment text is the request body, using the following URL pattern:
/api/interpretations/{interpretation-id}/comments/{comment-id}
curl "[Link]comments/idAzzhVWvh2" -d "I agree with that." -X PUT -H "Content-Type:text/plain" -u admin:district
You can use the same URL pattern as above using a DELETE request to remove the interpretation comment.
To like an interpretation you can use an empty POST request to the like resource:
POST /api/interpretations/{id}/like
A like will be added for the currently authenticated user. A user can only like an interpretation
once.
To remove a like for an interpretation you can use a DELETE request to the same resource as for
the like operation.
The like status of an interpretation can be viewed by looking at the regular Web API
representation:
GET /api/interpretations/{id}
The like information is found in the likes field, which represents the number of likes, and the
likedBy array, which enumerates the users who have liked the interpretation.
{
  "id": "XSHiFlHAhhh",
  "text": "Data looks suspicious, could be a data entry mistake.",
  "type": "REPORT_TABLE",
  "likes": 2,
  "likedBy": [
    {
      "id": "k7Hg12fJ2f1"
    },
    {
      "id": "gYhf26fFkjFS"
    }
  ]
}
1.38 Viewing analytical resource representations

DHIS2 has several resources for data analysis. These resources include charts, maps, reportTables, reports and documents. By visiting these resources you will retrieve information about the resource. For instance, by navigating to /api/charts/R0DVGvXDUNP the response will contain the name, last date of modification and so on for the chart. To retrieve the analytical representation, for instance a PNG representation of the chart, you can append /data to all these resources. For instance, by visiting /api/charts/R0DVGvXDUNP/data the system will return a PNG image of the chart.
Analytical resources
The data content of the analytical representations can be modified by providing a date query
parameter. This requires that the analytical resource is set up for relative periods for the period
dimension.
Query parameter   Value                        Description
date              Date in yyyy-MM-dd format    Basis for relative periods in report (requires relative periods)

Query parameter   Description
width             Width of image in pixels
height            Height of image in pixels
Some examples of valid URLs for retrieving various analytical representations are listed below.
/api/charts/R0DVGvXDUNP/data
/api/charts/R0DVGvXDUNP/data?date=2013-06-01
/api/reportTables/jIISuEWxmoI/[Link]
/api/reportTables/jIISuEWxmoI/[Link]?date=2013-01-01
/api/reportTables/FPmvWs7bn2P/[Link]
/api/reportTables/FPmvWs7bn2P/[Link]
/api/maps/DHE98Gsynpr/data
/api/maps/DHE98Gsynpr/data?date=2013-07-01
/api/reports/OeJsA6K1Otx/[Link]
/api/reports/OeJsA6K1Otx/[Link]?date=2014-01-01
1.39 Plugins
DHIS2 comes with plugins which enable you to embed live data directly in your web portal or web
site. Currently, plugins exist for charts, maps and pivot tables.
Please be aware that all of the code examples in this section are for demonstration purposes
only. They should not be used as is in production systems. To make things simple, the credentials
(admin/district) have been embedded into the scripts. In a real scenario, you should never
expose credentials in javascript as it opens a vulnerability to the application. In addition, you
would create a user with more minimal privileges rather than make use of a superuser to fetch
resources for your portal.
It is possible to work around exposing the credentials by using a reverse proxy such as nginx or apache2. The proxy can be configured to inject the required Authorization header for only the
endpoints that you wish to make public. There is some documentation to get you started in the
section of the implementers manual which describes reverse proxy configuration.
1.39.1 Embedding pivot tables with the Pivot Table plug-in

In this example, we will see how we can embed good-looking, light-weight HTML pivot tables with data served from a DHIS2 back-end into a Web page. To accomplish this we will use the Pivot table plug-in. The plug-in is written in JavaScript and depends on the jQuery library only. A complete working example can be found at [Link] Open the page in a web browser and view the source to see how it is set up.
We start by having a look at what the complete html file could look like. This setup puts two
tables in our web page. The first one is referring to an existing table. The second is configured
inline.
<!DOCTYPE html>
<html>
<head>
  <script src="[Link][Link]"></script>
  <script src="[Link]plugin/[Link]"></script>
  <script>
    [Link] = "[Link]";
    [Link] = "admin";
    [Link] = "district";
    [Link] = true;

    var r1 = { el: "report1", id: "R0DVGvXDUNP" };

    var r2 = {
      el: "report2",
      columns: [
        { dimension: "dx", items: [{ id: "YtbsuPPo010" }, { id: "l6byfWFUGaP" }] }
      ],
      rows: [{ dimension: "pe", items: [{ id: "LAST_12_MONTHS" }] }],
      filters: [{ dimension: "ou", items: [{ id: "USER_ORGUNIT" }] }],
      showColTotals: false,
      showRowTotals: false,
      showColSubTotals: false,
      showRowSubTotals: false,
      showDimensionLabels: false,
      hideEmptyRows: true,
      skipRounding: true,
      aggregationType: "AVERAGE",
      showHierarchy: true,
      completedOnly: true,
      displayDensity: "COMFORTABLE",
      fontSize: "SMALL",
      digitGroupSeparator: "COMMA",
      legendSet: { id: "fqs276KXCXi" }
    };

    [Link]([r1, r2]);
  </script>
</head>
<body>
  <div id="report1"></div>
  <div id="report2"></div>
</body>
</html>
Two files are included in the header section of the HTML document. The first file is the jQuery
JavaScript library (we use the DHIS2 content delivery network in this case). The second file is the
Pivot table plug-in. Make sure the path is pointing to your DHIS2 server installation.
Now let us have a look at the various options for the Pivot tables. One property is required: el
(please refer to the table below). Now, if you want to refer to pre-defined tables already made
inside DHIS2 it is sufficient to provide the additional id parameter. If you instead want to
configure a pivot table dynamically you should omit the id parameter and provide data
dimensions inside a columns array, a rows array and optionally a filters array instead.
A data dimension is defined as an object with a text property called dimension. This property
accepts the following values: dx (indicator, data element, data element operand, data set, event
data item and program indicator), pe (period), ou (organisation unit) or the id of any organisation
unit group set or data element group set (can be found in the web api). The data dimension also
has an array property called items which accepts objects with an id property.
To sum up, if you want to have e.g. “ANC 1 Coverage”, “ANC 2 Coverage” and “ANC 3 Coverage” on
the columns in your table you can make the following columns config:
columns: [{
  dimension: "dx",
  items: [
    {id: "Uvn6LCg7dVU"}, // the id of ANC 1 Coverage
    {id: "OdiHJayrsKo"}, // the id of ANC 2 Coverage
    {id: "sB79w2hiLp8"}  // the id of ANC 3 Coverage
  ]
}]
Options

Param               Type             Required (default first)    Description
url                 string           Yes                         Base URL of the DHIS2 server
username            string           Yes (if cross-domain)       Used for authentication if the server is running on a different domain
password            string           Yes (if cross-domain)       Used for authentication if the server is running on a different domain
loadingIndicator    boolean          No                          Whether to show a loading indicator before the table appears
id                  string           No
filter              array            No
title               string           No
legendSet           object           No
userOrgUnit         string / array   No
relativePeriodDate  string           No
1.39.2 Embedding charts with the Visualizer chart plug-in
In this example, we will see how we can embed good-looking Highcharts charts (http://[Link]) with data served from a DHIS2 back-end into a Web page. To accomplish this we will use the DHIS2 Visualizer plug-in. The plug-in is written in JavaScript and depends on the jQuery library. A complete working example can be found at [Link] Open the page in a web browser and view the source to see how it is set up.
We start by having a look at what the complete html file could look like. This setup puts two
charts on our web page. The first one is referring to an existing chart. The second is configured
inline.
<!DOCTYPE html>
<html>
<head>
  <script src="[Link][Link]"></script>
  <script src="[Link]v227/plugin/[Link]"></script>
  <script>
    [Link] = "[Link]demo";
    [Link] = "admin";
    [Link] = "district";
    [Link] = true;

    var r1 = { el: "report1", id: "R0DVGvXDUNP" };

    var r2 = {
      el: "report2",
      columns: [
        { dimension: "dx", items: [{ id: "YtbsuPPo010" }, { id: "l6byfWFUGaP" }] }
      ],
      rows: [{ dimension: "pe", items: [{ id: "LAST_12_MONTHS" }] }],
      filters: [{ dimension: "ou", items: [{ id: "USER_ORGUNIT" }] }],
      type: "line",
      showValues: false,
      hideEmptyRows: true,
      regressionType: "LINEAR",
      completedOnly: true,
      targetLineValue: 100,
      targetLineTitle: "My target line title",
      baseLineValue: 20,
      baseLineTitle: "My base line title",
      aggregationType: "AVERAGE",
      rangeAxisMaxValue: 100,
      rangeAxisMinValue: 20,
      rangeAxisSteps: 5,
      rangeAxisDecimals: 2,
      rangeAxisTitle: "My range axis title",
      domainAxisTitle: "My domain axis title",
      hideLegend: true
    };

    // Render the charts
    [Link](r1, r2);
  </script>
</head>
<body>
  <div id="report1"></div>
  <div id="report2"></div>
</body>
</html>
Two files are included in the header section of the HTML document. The first file is the jQuery
JavaScript library (we use the DHIS2 content delivery network in this case). The second file is the
Visualizer chart plug-in. Make sure the path is pointing to your DHIS2 server installation.
Now let us have a look at the various options for the charts. One property is required: el (please
refer to the table below). Now, if you want to refer to pre-defined charts already made inside
DHIS2 it is sufficient to provide the additional id parameter. If you instead want to configure a
chart dynamically you should omit the id parameter and provide data dimensions inside a
columns array, a rows array and optionally a filters array instead.
A data dimension is defined as an object with a text property called dimension. This property
accepts the following values: dx (indicator, data element, data element operand, data set, event
data item and program indicator), pe (period), ou (organisation unit) or the id of any organisation
unit group set or data element group set (can be found in the web api). The data dimension also
has an array property called items which accepts objects with an id property.
To sum up, if you want to have e.g. “ANC 1 Coverage”, “ANC 2 Coverage” and “ANC 3 Coverage” on
the columns in your chart you can make the following columns config:
columns: [{
  dimension: "dx",
  items: [
    {id: "Uvn6LCg7dVU"}, // the id of ANC 1 Coverage
    {id: "OdiHJayrsKo"}, // the id of ANC 2 Coverage
    {id: "sB79w2hiLp8"}  // the id of ANC 3 Coverage
  ]
}]
Options

Param               Type             Required (default first)    Description
url                 string           Yes                         Base URL of the DHIS2 server
username            string           Yes (if cross-domain)       Used for authentication if the server is running on a different domain
password            string           Yes (if cross-domain)       Used for authentication if the server is running on a different domain
loadingIndicator    boolean          No                          Whether to show a loading indicator before the chart appears

Chart configuration

Param               Type             Required
id                  string           No
filter              array            No
title               string           No
targetLineValue     number           No
targetLineTitle     string           No
baseLineValue       number           No
baseLineTitle       string           No
rangeAxisTitle      number           No
rangeAxisMaxValue   number           No
rangeAxisMinValue   number           No
rangeAxisDecimals   number           No
domainAxisTitle     number           No
userOrgUnit         string / array   No
relativePeriodDate  string           No
1.39.3 Embedding maps with the GIS map plug-in

In this example we will see how we can embed maps with data served from a DHIS2 back-end into a Web page. To accomplish this we will use the GIS map plug-in. The plug-in is written in JavaScript and depends on the Ext JS library only. A complete working example can be found at [Link] Open the page in a web browser and view the source to see how it is set up.
We start by having a look at what the complete html file could look like. This setup puts two maps
on our web page. The first one is referring to an existing map. The second is configured inline.
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" type="text/css" href="[Link][Link]" />
  <script src="[Link]v215/ext/[Link]"></script>
  <script src="[Link]api/js?sensor=false"></script>
  <script src="[Link]openlayers/[Link]"></script>
  <script src="[Link]v215/plugin/[Link]"></script>
  <script>
    var base = "https://[Link]/demo";

    [Link](function() {
      [Link]({
        url: base + "dhis-web-commons-security/[Link]",
        method: "POST",
        params: { j_username: "portal", j_password: "Portal123" },
        success: setLinks
      });
    });

    function setLinks() {
      [Link]({ url: base, el: "map1", id: "ytkZY3ChM6J" });

      [Link]({
        url: base,
        el: "map2",
        mapViews: [
          {
            columns: [{ dimension: "in", items: [{ id: "Uvn6LCg7dVU" }] }], // data
            rows: [{ dimension: "ou", items: [{ id: "LEVEL-3" }, { id: "ImspTQPwCqd" }] }], // organisation units
            filters: [{ dimension: "pe", items: [{ id: "LAST_3_MONTHS" }] }], // period
            // All following options are optional
            classes: 7,
            colorLow: "02079c",
            colorHigh: "e5ecff",
            opacity: 0.9,
            legendSet: { id: "fqs276KXCXi" }
          }
        ]
      });
    }
  </script>
</head>
<body>
  <div id="map1"></div>
  <div id="map2"></div>
</body>
</html>
Four files and Google Maps are included in the header section of the HTML document. The first
two files are the Ext JS JavaScript library (we use the DHIS2 content delivery network in this case)
and its stylesheet. The third file is the OpenLayers JavaScript mapping framework (http://
[Link]) and finally we include the GIS map plug-in. Make sure the path is pointing to your
DHIS2 server installation.
To authenticate with the DHIS2 server we use the same approach as in the previous section. In
the header of the HTML document we include the following Javascript inside a script element.
The setLinks method will be implemented later. Make sure the base variable is pointing to your
DHIS2 installation.
[Link]( function() {
[Link]({
url: base + "dhis-web-commons-security/[Link]",
method: "POST",
params: { j_username: "portal", j_password: "Portal123" },
success: setLinks
});
});
Now let us have a look at the various options for the GIS plug-in. Two properties are required: el
and url (please refer to the table below). Now, if you want to refer to pre-defined maps already
made in the DHIS2 GIS it is sufficient to provide the additional id parameter. If you instead want
to configure a map dynamically you should omit the id parameter and provide mapViews (layers)
instead. They should be configured with data dimensions inside a columns array, a rows array
and optionally a filters array instead.
A data dimension is defined as an object with a text property called dimension. This property
accepts the following values: in (indicator), de (data element), ds (data set), dc (data element
operand), pe (period), ou (organisation unit) or the id of any organisation unit group set or data
element group set (can be found in the web api). The data dimension also has an array property
called items which accepts objects with an id property.
To sum up, if you want to have a layer with e.g. “ANC 1 Coverage” in your map you can make the
following columns config:
columns: [{
  dimension: "in", // could be "in", "de", "ds", "dc", "pe", "ou" or any dimension id
  items: [{id: "Uvn6LCg7dVU"}] // the id of ANC 1 Coverage
}]
If no id is provided you must add map view objects with the following config options:
We continue by adding one pre-defined and one dynamically configured map to our HTML document. You can browse the list of available maps using the Web API here: http://[Link]/demo/api/33/maps.
function setLinks() {
  [Link]({ url: base, el: "map1", id: "ytkZY3ChM6J" });

  [Link]({
    url: base,
    el: "map2",
    mapViews: [{
      columns: [{dimension: "in", items: [{id: "Uvn6LCg7dVU"}]}], // data
      rows: [{dimension: "ou", items: [{id: "LEVEL-3"}, {id: "ImspTQPwCqd"}]}], // organisation units
      filters: [{dimension: "pe", items: [{id: "LAST_3_MONTHS"}]}], // period
      // All following options are optional
      classes: 7,
      colorLow: "02079c",
      colorHigh: "e5ecff",
      opacity: 0.9,
      legendSet: {id: "fqs276KXCXi"}
    }]
  });
}
Finally we include some div elements in the body section of the HTML document with the
identifiers referred to in the plug-in JavaScript.
<div id="map1"></div>
<div id="map2"></div>
1.39.4 Creating a chart carousel with the carousel plug-in

The chart plug-in also makes it possible to create a chart carousel which, for instance, can be used to create an attractive front page on a Web portal. To use the carousel we need to import a few files in the head section of our HTML page:
<link rel="stylesheet" type="text/css" href="[Link][Link]" />
<link rel="stylesheet" type="text/css" href="[Link]css/[Link]" />
<script type="text/javascript" src="[Link]release/[Link]"></script>
<script type="text/javascript" src="[Link]carousel/[Link]"></script>
<script type="text/javascript" src="[Link]plugin/[Link]"></script>
The first file is the CSS stylesheet for the chart plug-in. The second file is the CSS stylesheet for
the carousel widget. The third file is the Ext JavaScript framework which this plug-in depends on.
The fourth file is the carousel plug-in JavaScript file. The fifth file is the chart plug-in JavaScript file.
The paths in this example point at the DHIS2 demo site; make sure you update them to point to your own DHIS2 installation.
Please refer to the section about the chart plug-in on how to do authentication.
To create a chart carousel we will first render the charts we want to include in the carousel using
the method described in the chart plug-in section. Then we create the chart carousel itself. The
charts will be rendered into div elements which all have a CSS class called chart. In the carousel
configuration we can then define a selector expression which refers to those div elements like
this:
[Link]({ uid: "R0DVGvXDUNP", el: "chartA1", url: base });
[Link]({ uid: "X0CPnV6uLjR", el: "chartA2", url: base });
[Link]({ uid: "j1gNXBgwKVm", el: "chartA3", url: base });
[Link]({ uid: "X7PqaXfevnL", el: "chartA4", url: base });

new [Link]("chartCarousel", {
  autoPlay: true,
  itemSelector: "[Link]",
  interval: 5,
  showPlayButton: true
});
The first argument in the configuration is the id of the div element in which you want to render
the carousel. The autoPlay configuration option refers to whether we want the carousel to start
when the user loads the Web page. The interval option defines how many seconds each chart
should be displayed. The showPlayButton defines whether we want to render a button for the
user to start and stop the carousel. Finally we need to insert the div elements in the body of the
HTML document:
<div id="chartCarousel">
  <div id="chartA1"></div>
  <div id="chartA2"></div>
  <div id="chartA3"></div>
  <div id="chartA4"></div>
</div>
1.40 SQL views

The SQL views resource allows you to create and retrieve the result set of SQL views. The SQL views can be executed directly against the database and render the result set through the Web API resource.

/api/sqlViews

SQL views are useful for creating data views which may be more easily constructed with SQL compared to combining the multiple objects of the Web API. As an example, let's assume we have been asked to provide a view of all organization units with their names, parent names, organization unit level and name, and the coordinates listed in the database. The view might look something like this:
146
1 Web API 1.40.1 Criteria
WHERE [Link] is not null
ORDER BY [Link], [Link], [Link]
We will use curl to first execute the view on the DHIS2 server. This is essentially a materialization
process, and ensures that we have the most recent data available through the SQL view when it
is retrieved from the server. You can first look up the SQL view from the api/sqlViews resource,
then POST using the following command:
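A sketch of such a command, assuming the execute sub-resource of the SQL view; {id} stands for the SQL view identifier looked up from api/sqlViews and {server} for the base URL:

# Execute (materialize) the SQL view so the latest data is available
curl "{server}/api/sqlViews/{id}/execute" \
  -X POST -u admin:district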
The next step in the process is the retrieval of the data. The basic structure of the URL is as follows:
[Link]
The {server} parameter should be replaced with your own server. The next part of the URL, /api/sqlViews/, should be appended with the specific SQL view identifier. Append either data for XML data or [Link] for comma delimited values. Supported response formats are json, xml, csv, xls, html and html+css. As an example, the following command would retrieve XML data for the SQL view defined above.
curl "[Link]" -u admin:district
• Materialized SQL view: SQL views which are materialized, meaning written to disk. Needs
to be updated to reflect changes in underlying tables. Supports criteria to filter result set.
• SQL queries: Plain SQL queries. Support inline variables for customized queries.
1.40.1 Criteria
You can do simple filtering on the columns in the result set by appending criteria query parameters to the URL, using the column names and filter values separated by colons as parameter values, on the following format:
/api/sqlViews/{id}/data?criteria=col1:value1&criteria=col2:value2
As an example, to filter the SQL view result set above to only return organisation units at level 4
you can use the following URL:
[Link]
1.40.2 Variables
SQL views support variable substitution. Variable substitution is only available for SQL view of
type query, meaning SQL views which are not created in the database but simply executed as
regular SQL queries. Variables can be inserted directly into the SQL query and must be on this
format:
${variable-key}
As an example, an SQL query that retrieves all data elements of a given value type where the
value type is defined through a variable can look like this:
These variables can then be supplied as part of the URL when requested through the sqlViews
Web API resource. Variables can be supplied on the following format:
/api/sqlViews/{id}/data?var=key1:value1&var=key2:value2
An example query corresponding to the example above can look like this:
/api/sqlViews/dI68mLkP1wN/[Link]?var=valueType:int
The valueType variable will be substituted with the int value, and the query will return data
elements with int value type.
The variable parameter must contain alphanumeric characters only. The variable values must contain alphanumeric, dash, underscore and whitespace characters only.
SQL Views of type query also support two system-defined variables that allow the query to access
information about the user executing the view:
Variable                   Meaning
${_current_user_id}        the user's database id
${_current_username}       the user's username
Values for these variables cannot be supplied as part of the URL. They are always filled with
information about the user.
For example, the following SQL view of type query shows all the organisation units that are
assigned to the user:
select [Link], [Link]
from organisationunit ou_user
join organisationunit ou on [Link] like ou_user.path || '%'
join usermembership um on [Link] = ou_user.organisationunitid
where [Link] = ${_current_user_id}
order by [Link]
1.40.3 Filtering
The SQL view api supports data filtering, equal to the metadata object filter. For a complete list of
filter operators you can look at the documentation for metadata object filter.
To use filters, simply add them as parameters at the end of the request url for your SQL view like
this:
/api/sqlViews/w3UxFykyHFy/[Link]?filter=orgunit_level:eq:2&filter=orgunit_name:ilike:bo
This request will return a result including org units with “bo” in the name and which have org unit level 2.
The following example will return all org units with orgunit_level 2 or 4:
/api/sqlViews/w3UxFykyHFy/[Link]?filter=orgunit_level:in:[2,4]
And last, an example to return all org units that do not start with “Bo”:
/api/sqlViews/w3UxFykyHFy/[Link]?filter=orgunit_name:!like:Bo
1.41 Dashboard

The dashboard is designed to give you an overview of multiple analytical items like maps, charts,
pivot tables and reports which together can provide a comprehensive overview of your data.
Dashboards are available in the Web API through the dashboards resource. A dashboard
contains a list of dashboard items. An item can represent a single resource, like a chart, map or
report table, or represent a list of links to analytical resources, like reports, resources, tabular
reports and users. A dashboard item can contain up to eight links. Typically, a dashboard client
could choose to visualize the single-object items directly in a user interface, while rendering the
multi-object items as clickable links.
/api/dashboards
1.41.1 Browsing dashboards

To get a list of your dashboards with basic information including identifier, name and link in JSON
format you can make a GET request to the following URL:
/api/[Link]
The dashboards resource will provide a list of dashboards. Remember that the dashboard object
is shared so the list will be affected by the currently authenticated user. You can retrieve more
information about a specific dashboard by following its link, similar to this:
/api/dashboards/[Link]
A dashboard contains information like name and creation date and an array of dashboard items.
The response in JSON format will look similar to this response (certain information has been
removed for the sake of brevity).
{
  "lastUpdated": "2013-10-15T[Link].084+0000",
  "id": "vQFhmLJU5sK",
  "created": "2013-09-08T[Link].060+0000",
  "name": "Mother and Child Health",
  "href": "[Link]dashboards/vQFhmLJU5sK",
  "publicAccess": "--------",
  "externalAccess": false,
  "itemCount": 17,
  "displayName": "Mother and Child Health",
  "access": {
    "update": true,
    "externalize": true,
    "delete": true,
    "write": true,
    "read": true,
    "manage": true
  },
  "user": {
    "id": "xE7jOejl9FI",
    "name": "John Traore",
    "created": "2013-04-18T[Link].407+0000",
    "lastUpdated": "2014-12-05T[Link].148+0000",
    "href": "[Link]users/xE7jOejl9FI"
  },
  "dashboardItems": [
    {
      "id": "bu1IAnPFa9H",
      "created": "2013-09-09T[Link].095+0000",
      "lastUpdated": "2013-09-09T[Link].095+0000"
    },
    {
      "id": "ppFEJmWWDa1",
      "created": "2013-09-10T[Link].480+0000",
      "lastUpdated": "2013-09-10T[Link].480+0000"
    }
  ],
  "userGroupAccesses": []
}
A more tailored response can be obtained by specifying specific fields in the request. An example is provided below, which would return more detailed information about each object on a user's dashboard.
/api/dashboards/vQFhmLJU5sK/?fields=:all,dashboardItems[:all]
1.41.2 Searching dashboards

When a user is building a dashboard it is convenient to be able to search for various analytical resources using the /dashboards/q resource. This resource lets you search for matches on the name property of the following objects: charts, maps, report tables, users, reports and resources. You can do a search by making a GET request on the following resource URL pattern, where my-query should be replaced by the preferred search query:
/api/dashboards/q/[Link]
/api/dashboards/q/ma?count=6&maxCount=20&max=CHART&max=MAP
Query parameter   Description                                   Type                                                                 Default
count             The number of items of each type to return    Positive integer                                                     6
maxCount          The number of items of max types to return    Positive integer                                                     25
max               The type to return the maxCount for           String [CHART|MAP|REPORT_TABLE|USER|REPORT|RESOURCE|VISUALIZATION]   N/A
JSON and XML response formats are supported. The response in JSON format will contain references to matching resources and counts of how many matches were found in total and for each type of resource. It will look similar to this:

{
  "charts": [
    {
      "name": "ANC: 1-3 dropout rate Yearly",
      "id": "LW0O27b7TdD"
    },
    {
      "name": "ANC: 1 and 3 coverage Yearly",
      "id": "UlfTKWZWV4u"
    },
    {
      "name": "ANC: 1st and 3rd trends Monthly",
      "id": "gnROK20DfAA"
    }
  ],
  "visualizations": [
    {
      "name": "ANC: ANC 3 Visits Cumulative Numbers",
      "id": "arf9OiyV7df",
      "type": "LINE"
    },
    {
      "name": "ANC: 1st and 2rd trends Monthly",
      "id": "jkf6OiyV7el",
      "type": "PIVOT_TABLE"
    }
  ],
  "maps": [
    {
      "name": "ANC: 1st visit at facility (fixed) 2013",
      "id": "YOEGBvxjAY0"
    },
    {
      "name": "ANC: 3rd visit coverage 2014 by district",
      "id": "ytkZY3ChM6J"
    }
  ],
  "reportTables": [
    {
      "name": "ANC: ANC 1 Visits Cumulative Numbers",
      "id": "tWg9OiyV7mu"
    }
  ],
  "reports": [
    {
      "name": "ANC: 1st Visit Cumulative Chart",
      "id": "Kvg1AhYHM8Q"
    },
    {
      "name": "ANC: Coverages This Year",
      "id": "qYVNH1wkZR0"
    }
  ],
  "searchCount": 8,
  "chartCount": 3,
  "mapCount": 2,
  "reportTableCount": 1,
  "reportCount": 2,
  "userCount": 0,
  "patientTabularReportCount": 0,
  "resourceCount": 0
}
1.41.3 Creating, updating and removing dashboards

Creating, updating and deleting dashboards follow standard REST semantics. In order to create a new dashboard you can make a POST request to the /api/dashboards resource. From a consumer perspective it might be convenient to first create a dashboard and later add items to it. JSON and XML formats are supported for the request payload. To create a dashboard with the name “My dashboard” you can use a payload in JSON like this:
{
    "name": "My dashboard"
}
To update, e.g. rename, a dashboard, you can make a PUT request with a similar request payload
to the same /api/dashboards resource.
To remove a dashboard, you can make a DELETE request to the specific dashboard resource
similar to this:
/api/dashboards/vQFhmLJU5sK
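For reference, a minimal sketch of these three operations using Python and the requests library is shown below. The base URL and the admin:district credentials are placeholders for your own instance, and the location of the generated identifier in the POST response may vary between server versions.

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials

# Create a dashboard with a POST request to /api/dashboards.
created = requests.post(f"{BASE}/dashboards", json={"name": "My dashboard"}, auth=AUTH)
created.raise_for_status()
dashboard_id = created.json()["response"]["uid"]  # assumption: uid is returned in the web message

# Rename the dashboard with a PUT request to the specific dashboard resource.
requests.put(f"{BASE}/dashboards/{dashboard_id}",
             json={"name": "My renamed dashboard"}, auth=AUTH)

# Remove the dashboard again with a DELETE request.
requests.delete(f"{BASE}/dashboards/{dashboard_id}", auth=AUTH)
```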
1.41.4 Adding, moving and removing dashboard items and content

To add dashboard items and their content to a dashboard, you can make a POST request to the
dashboard's items/content resource, using the query parameters described below:

| Query parameter | Description | Options |
|---|---|---|
| type | Type of the resource to be represented by the dashboard item | chart \| visualization \| map \| reportTable \| users \| reports \| reportTables \| resources \| patientTabularReports \| app |
| id | Identifier of the resource to be represented by the dashboard item | Resource identifier |
A POST request URL for adding a chart to a specific dashboard could look like this, where the last
id query parameter value is the chart resource identifier:
/api/dashboards/vQFhmLJU5sK/items/content?type=chart&id=LW0O27b7TdD
When adding a resource of type map, chart, report table or app, the API will create and add a new
item to the dashboard. When adding a resource of type users, reports, report tables or
resources, the API will try to add the resource to an existing dashboard item of the same type. If
no item of the same type exists, or no item of the same type has fewer than eight resources
associated with it, the API will create a new dashboard item and add the resource to it.
In order to move a dashboard item to a new position within the list of items in a dashboard, a
consumer can make a POST request to the following resource URL, where <dashboard-id>
should be replaced by the identifier of the dashboard, <item-id> should be replaced by the
identifier of the dashboard item and <index> should be replaced by the new position of the item
in the dashboard, where the index is zero-based:
/api/dashboards/<dashboard-id>/items/<item-id>/position/<index>
To remove a dashboard item completely from a specific dashboard a consumer can make a
DELETE request to the below resource URL, where <dashboard-id> should be replaced by the
identifier of the dashboard and <item-id> should be replaced by the identifier of the
dashboard item. The dashboard item identifiers can be retrieved through a GET request to the
dashboard resource URL.
/api/dashboards/<dashboard-id>/items/<item-id>
To remove a specific content resource within a dashboard item a consumer can make a DELETE
request to the below resource URL, where <content-resource-id> should be replaced by the
identifier of a resource associated with the dashboard item; e.g. the identifier of a report or a
user. For instance, this can be used to remove a single report from a dashboard item of type
reports, as opposed to removing the dashboard item completely:
/api/dashboards/<dashboard-id>/items/<item-id>/content/<content-resource-id>
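The item operations described above can be combined. The sketch below (Python with requests; instance URL and credentials are placeholders, and the dashboard and chart identifiers are taken from the examples in this section) adds a chart to a dashboard, moves it to the top, and removes it again.

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials
dashboard = "vQFhmLJU5sK"                 # dashboard id from the example above
chart = "LW0O27b7TdD"                     # chart id from the search example

# Add the chart as dashboard item content.
requests.post(f"{BASE}/dashboards/{dashboard}/items/content",
              params={"type": "chart", "id": chart}, auth=AUTH)

# Look up the generated item identifier from the dashboard resource.
items = requests.get(f"{BASE}/dashboards/{dashboard}",
                     params={"fields": "dashboardItems[id]"},
                     auth=AUTH).json()["dashboardItems"]
item = items[-1]["id"]  # assumption: the newly added item is listed last

# Move the item to position 0 (the index is zero-based).
requests.post(f"{BASE}/dashboards/{dashboard}/items/{item}/position/0", auth=AUTH)

# Remove the item from the dashboard again.
requests.delete(f"{BASE}/dashboards/{dashboard}/items/{item}", auth=AUTH)
```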
1.42 Visualization
The Visualization API is designed to help clients to interact with charts and pivot/report tables.
The endpoints of this API are used by the Data Visualization application which allows the
creation, configuration and management of charts and pivot tables based on the client’s
definitions. The main idea is to enable clients and users to have a unique and centralized API
providing all types of charts and pivot tables as well as specific parameters and configuration for
each type of visualization.
This API was introduced to unify the charts and reportTables APIs and to entirely replace them
with the visualizations API (which means that the usage of the charts and reportTables APIs
should be avoided). In summary, the resources /api/charts and /api/reportTables are replaced
by /api/visualizations.
Note
New applications and clients should avoid using the charts and
reportTables APIs because they are deprecated. Use the
visualizations API instead.
A Visualization object is composed of many attributes (some of them related to charts and others
related to pivot tables), but the most important ones, which reflect the core information of the
object, are: "id", "name", "type", "dataDimensionItems", "columns", "rows" and "filters".
The root endpoint of the API is /api/visualizations, and the list of current attributes and
elements are described in the table below.
Visualization attributes

| Field | Description |
|---|---|
| id | The unique identifier. |
| code | A custom code to identify the Visualization. |
| name | The name of the Visualization. |
| type | The type of the Visualization. The valid types are: COLUMN, STACKED_COLUMN, BAR, STACKED_BAR, LINE, AREA, PIE, RADAR, GAUGE, YEAR_OVER_YEAR_LINE, YEAR_OVER_YEAR_COLUMN, SINGLE_VALUE, PIVOT_TABLE. |
| title | A custom title. |
| subtitle | A custom subtitle. |
| description | Defines a custom description for the Visualization. |
| created | The date/time of the Visualization creation. |
| startDate | The beginning date used during the filtering. |
| endDate | The ending date used during the filtering. |
| sortOrder | The sorting order of this Visualization. Integer value. |
| user | An object representing the creator of the Visualization. |
| publicAccess | Sets the permissions for public access. |
| displayDensity | The display density of the text. |
| fontSize | The font size of the text. |
| relativePeriods | An object representing the relative periods used in the analytics query. |
| legendSet | An object representing the definitions for the legend. |
| legendDisplayStyle | The legend's display style. It can be: FILL or TEXT. |
| legendDisplayStrategy | The legend's display strategy. It can be: FIXED or BY_DATA_ITEM. |
| aggregationType | Determines how the values in the pivot table are aggregated. Valid options: SUM, AVERAGE, AVERAGE_SUM_ORG_UNIT, LAST, LAST_AVERAGE_ORG_UNIT, FIRST, FIRST_AVERAGE_ORG_UNIT, COUNT, STDDEV, VARIANCE, MIN, MAX, NONE, CUSTOM or DEFAULT. |
| regressionType | A valid regression type: NONE, LINEAR, POLYNOMIAL or LOESS. |
| targetLineValue | The chart target line. Accepts a Double type. |
| targetLineLabel | The chart target line label. |
| rangeAxisLabel | The chart vertical axis (y) label/title. |
| domainAxisLabel | The chart horizontal axis (x) label/title. |
| rangeAxisMaxValue | The chart axis maximum value. Values outside of the range will not be displayed. |
| rangeAxisMinValue | The chart axis minimum value. Values outside of the range will not be displayed. |
| rangeAxisSteps | The number of axis steps between the minimum and maximum values. |
| rangeAxisDecimals | The number of decimals for the axes values. |
| baseLineValue | A chart baseline value. |
| baseLineLabel | A chart baseline label. |
| digitGroupSeparator | The digit group separator. Valid values: COMMA, SPACE or NONE. |
| topLimit | The top limit set for the Pivot table. |
| measureCriteria | Describes the criteria applied to this measure. |
| percentStackedValues | Whether to use stacked values. More likely to be applied to charts. Boolean value. |
| noSpaceBetweenColumns | Show/hide space between columns. Boolean value. |
| regression | Indicates whether the Visualization contains regression columns. More likely to be applicable to pivot/report tables. Boolean value. |
| externalAccess | Indicates whether the Visualization is available as external read-only. Boolean value. |
| userOrganisationUnit | Indicates if the user has an organisation unit. Boolean value. |
| userOrganisationUnitChildren | Indicates if the user has a children organisation unit. Boolean value. |
| userOrganisationUnitGrandChildren | Indicates if the user has a grandchildren organisation unit. Boolean value. |
| reportingParams | Object used to define boolean attributes related to reporting. |
| rowTotals | Displays (or not) the row totals. Boolean value. |
| colTotals | Displays (or not) the column totals. Boolean value. |
| rowSubTotals | Displays (or not) the row sub-totals. Boolean value. |
| colSubTotals | Displays (or not) the column sub-totals. Boolean value. |
| cumulativeValues | Indicates whether the Visualization is using cumulative values. Boolean value. |
| hideEmptyColumns | Indicates whether to hide columns with no data values. Boolean value. |
| hideEmptyRows | Indicates whether to hide rows with no data values. Boolean value. |
| completedOnly | Indicates whether to include only completed data. Boolean value. |
| skipRounding | Apply or not rounding. Boolean value. |
| showDimensionLabels | Shows the dimension labels or not. Boolean value. |
| hideTitle | Hides the title or not. Boolean value. |
| hideSubtitle | Hides the subtitle or not. Boolean value. |
| hideLegend | Show/hide the legend. Very likely to be used by charts. Boolean value. |
| showHierarchy | Displays (or not) the organisation unit hierarchy names. Boolean value. |
| showData | Used by charts to hide or show data/values within the rendered model. Boolean value. |
| lastUpdatedBy | Object that represents the user that applied the last changes to the Visualization. |
| lastUpdated | The date/time of the last time the Visualization was changed. |
| favorites | List of user ids who have marked this object as a favorite. |
| subscribers | List of user ids who have subscribed to this Visualization. |
| translations | Set of available object translations, normally filtered by locale. |
1.42.1 Retrieving visualizations

To retrieve a list of all existing visualizations, in JSON format, with some basic information
(including identifier, name and pagination) you can make a GET request to the URL below. You
should see a list of all public/shared visualizations plus your private ones.
GET /api/[Link]
If you want to retrieve the JSON definition of a specific Visualization you can add its respective
identifier to the URL:
GET /api/visualizations/[Link]
The following representation is an example of a response in JSON format (for brevity, certain
information has been removed). For the complete schema, please use GET /api/schemas/
visualization.
{
    "lastUpdated": "2020-02-06T[Link].678",
    "href": "[Link]visualizations/hQxZGXqnLS9",
    "id": "hQxZGXqnLS9",
    "created": "2017-05-19T[Link].785",
    "name": "ANC: ANC 1st visits last 12 months cumulative values",
    "publicAccess": "rw------",
    "userOrganisationUnitChildren": false,
    "type": "LINE",
    "access": {},
    "reportingParams": {
        "parentOrganisationUnit": false,
        "reportingPeriod": false,
        "organisationUnit": false,
        "grandParentOrganisationUnit": false
    },
    "dataElementGroupSetDimensions": [],
    "attributeDimensions": [],
    "yearlySeries": [],
    "filterDimensions": ["dx"],
    "columns": [
        {
            "id": "ou"
        }
    ],
    "dataElementDimensions": [],
    "categoryDimensions": [],
    "rowDimensions": ["pe"],
    "columnDimensions": ["ou"],
    "dataDimensionItems": [
        {
            "dataDimensionItemType": "DATA_ELEMENT",
            "dataElement": {
                "id": "fbfJHSPpUQD"
            }
        }
    ],
    "filters": [
        {
            "id": "dx"
        }
    ],
    "rows": [
        {
            "id": "pe"
        }
    ]
}
A more tailored response can be obtained by specifying, in the URL, the fields you want to
extract. For example:
GET /api/visualizations/[Link]?fields=interpretations
will return
"interpretations":
[
{
"id":
"Lfr8I2RPU0C"
},
{
"id":
"JuwgdJlJPGb"
},
{
"id":
"WAoU2rSpyZp"
}
]
}
As seen, the GET above will return only the interpretations related to the given identifier (in this
case hQxZGXqnLS9).
1.42.2 Creating, updating and removing visualizations

These operations follow the standard REST semantics. A new Visualization can be created
through a POST request to the /api/visualizations resource with a valid JSON payload. An
example payload could be:
"columns":
[
{
"dimension":
"J5jldMd8OHv",
"items":
[
160
1 Web API 1.42.2 Creating, updating and removing visualizations
"name":
"CHP",
"id":
"uYxK4wmcPqA",
"displayName":
"CHP",
"displayShortName":
"CHP",
"dimensionItemType":
"ORGANISATION_UNIT_GROUP"
},
"name":
"Hospital",
"id":
"tDZVQ1WtwpA",
"displayName":
"Hospital",
"displayShortName":
"Hospital",
"dimensionItemType":
"ORGANISATION_UNIT_GROUP"
}
]
}
],
"rows":
[
{
"dimension":
"SooXFOUnciJ",
"items":
[
"name":
"DOD",
"id":
"B0bjKC0szQX",
"displayName":
"DOD",
"displayShortName":
"DOD",
"dimensionItemType":
"CATEGORY_OPTION_GROUP"
},
"name":
"CDC",
"id":
"OK2Nr4wdfrZ",
"displayName":
"CDC",
"displayShortName":
"CDC",
161
1 Web API 1.42.2 Creating, updating and removing visualizations
"dimensionItemType":
"CATEGORY_OPTION_GROUP"
}
]
}
],
"filters":
[
{
"dimension":
"ou",
"items":
[
"name":
"Sierra
Leone",
"id":
"ImspTQPwCqd",
"displayName":
"Sierra
Leone",
"displayShortName":
"Sierra Leone",
"dimensionItemType":
"ORGANISATION_UNIT"
},
"name":
"LEVEL-1",
"id":
"LEVEL-
H1KlN4QIauv",
"displayName":
"LEVEL-1"
}
]
}
],
"name":
"HIV
Cases
Monthly",
"description": "Cases of HIV
across the months",
"category":
"XY1vwCQskjX",
"showDimensionLabels":
true,
"hideEmptyRows":
true,
"hideEmptyColumns":
true,
"skipRounding":
true,
"aggregationType":
"SUM",
162
1 Web API 1.42.2 Creating, updating and removing visualizations
"regressionType":
"LINEAR",
"type":
"PIVOT_TABLE",
"numberType":
"VALUE",
"measureCriteria":
"Some
criteria",
"showHierarchy":
true,
"completedOnly":
true,
"displayDensity":
"NORMAL",
"fontSize":
"NORMAL",
"digitGroupSeparator":
"SPACE",
"legendDisplayStyle":
"FILL",
"legendDisplayStrategy":
"FIXED",
"hideEmptyRowItems":
"BEFORE_FIRST_AFTER_LAST",
"regression":
false,
"cumulative":
true,
"sortOrder":
1,
"topLimit":
2,
"rowTotals":
true,
"colTotals":
true,
"hideTitle":
true,
"hideSubtitle":
true,
"hideLegend":
true,
"showData":
true,
"baseLineLabel":
"A
base
label",
"targetLineLabel":
"A target
label",
"targetLineValue":
45.5,
163
1 Web API 1.42.2 Creating, updating and removing visualizations
"baseLineValue":
19.99,
"percentStackedValues":
true,
"noSpaceBetweenColumns":
true,
"rowSubTotals":
true,
"colSubTotals":
true,
"domainAxisLabel": "A
domain axis
label",
"rangeAxisLabel":
"A range axis
label",
"rangeAxisMaxValue":
123.65,
"rangeAxisMinValue":
33.89,
"rangeAxisSteps":
5,
"rangeAxisDecimals":
10,
"userOrgUnitType":
"TEI_SEARCH",
"externalAccess":
false,
"publicAccess":
"--------",
"reportingParams":
{
"reportingPeriod":
true,
"organisationUnit":
true,
"parentOrganisationUnit":
true,
"grandParentOrganisationUnit":
true
},
"parentGraphMap":
{
"ImspTQPwCqd":
""
},
"access":
{
"read":
true,
"update":
true,
"externalize":
true,
164
1 Web API 1.42.2 Creating, updating and removing visualizations
"delete":
false,
"write":
true,
"manage":
false
},
"optionalAxes":
[
{
"dimensionalItem":
"fbfJHSPpUQD",
"axis":
1
},
{
"dimensionalItem":
"cYeuwXTCPkU",
"axis":
2
}
],
"relativePeriods":
{
"thisYear":
false,
"quartersLastYear":
true,
"last52Weeks":
false,
"thisWeek":
false,
"lastMonth":
false,
"last14Days":
false,
"biMonthsThisYear":
false,
"monthsThisYear":
false,
"last2SixMonths":
false,
"yesterday":
false,
"thisQuarter":
false,
"last12Months":
false,
"last5FinancialYears":
false,
"thisSixMonth":
false,
165
1 Web API 1.42.2 Creating, updating and removing visualizations
"lastQuarter":
false,
"thisFinancialYear":
false,
"last4Weeks":
false,
"last3Months":
false,
"thisDay":
false,
"thisMonth":
false,
"last5Years":
false,
"last6BiMonths":
false,
"last4BiWeeks":
false,
"lastFinancialYear":
false,
"lastBiWeek":
false,
"weeksThisYear":
false,
"last6Months":
false,
"last3Days":
false,
"quartersThisYear":
false,
"monthsLastYear":
false,
"lastWeek":
false,
"last7Days":
false,
"thisBimonth":
false,
"lastBimonth":
false,
"lastSixMonth":
false,
"thisBiWeek":
false,
"lastYear":
false,
"last12Weeks":
false,
"last4Quarters":
false
},
166
1 Web API 1.42.2 Creating, updating and removing visualizations
"user":
{},
"yearlySeries":
["THIS_YEAR"],
"userGroupAccesses":
[
{
"access":
"rwx-----",
"userGroupUid":
"ZoHNWQajIoe",
"displayName": "Bo
District M&E
officers",
"id":
"ZoHNWQajIoe"
}
],
"userAccesses":
[
{
"access":
"--------",
"displayName":
"John
Barnes",
"id":
"DXyJmlo9rge",
"userUid":
"DXyJmlo9rge"
}
],
"legendSet":
{
"name":
"Death
rate
up",
"id":
"ham2eIDJ9k6",
"legends":
[
{
"startValue":
1,
"endValue":
2,
"color":
"red",
"image":
"some-
image"
},
{
"startValue":
2,
"endValue":
3,
167
1 Web API 1.43 Analytics
"color":
"blue",
"image":
"other-
image"
}
]
}
}
To update a specific Visualization, you can send a PUT request to the same /api/
visualizations resource with a similar payload plus the respective Visualization's identifier,
e.g.:
PUT /api/visualizations/hQxZGXqnLS9
Finally, to delete an existing Visualization, you can make a DELETE request specifying the
identifier of the Visualization to be removed, as shown:
DELETE /api/visualizations/hQxZGXqnLS9
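As an illustration, the create/update/delete cycle can be scripted as below (Python with requests). The instance URL and credentials are placeholders, the payload is a trimmed-down version in the same shape as the example above rather than a verified minimal payload, and the location of the generated identifier in the POST response may differ between server versions.

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials

# Trimmed-down payload: a pivot table with data, period and org unit dimensions.
payload = {
    "name": "HIV Cases Monthly",
    "type": "PIVOT_TABLE",
    "columns": [{"dimension": "dx", "items": [{"id": "fbfJHSPpUQD"}]}],
    "rows": [{"dimension": "pe", "items": [{"id": "LAST_12_MONTHS"}]}],
    "filters": [{"dimension": "ou", "items": [{"id": "ImspTQPwCqd"}]}],
}

# Create the visualization.
created = requests.post(f"{BASE}/visualizations", json=payload, auth=AUTH)
created.raise_for_status()
uid = created.json()["response"]["uid"]  # assumption: uid is returned in the web message

# Update it: PUT the full payload (plus any changes) to the identifier.
payload["name"] = "HIV Cases Monthly (updated)"
requests.put(f"{BASE}/visualizations/{uid}", json=payload, auth=AUTH)

# Delete it again.
requests.delete(f"{BASE}/visualizations/{uid}", auth=AUTH)
```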
1.43 Analytics
To access analytical, aggregated data in DHIS2 you can work with the analytics resource. The
analytics resource is powerful as it lets you query and retrieve data aggregated along all available
data dimensions. For instance, you can ask the analytics resource to provide the aggregated data
values for a set of data elements, periods and organisation units. Also, you can retrieve the
aggregated data for a combination of any number of dimensions based on data elements and
organisation unit group sets.
/api/33/analytics
1.43.1 Request query parameters
The dimension query parameter defines which dimensions should be included in the analytics
query. Any number of dimensions can be specified. The dimension parameter should be
repeated for each dimension to include in the query response. The query response can
potentially contain aggregated values for all combinations of the specified dimension items.
The filter parameter defines which dimensions should be used as filters for the data retrieved in
the analytics query. Any number of filters can be specified. The filter parameter should be
repeated for each filter to use in the query. A filter differs from a dimension in that the filter
dimensions will not be part of the query response content, and that the aggregated values in the
response will be collapsed on the filter dimensions. In other words, the data in the response will
be aggregated on the filter dimensions, but the filters will not be included as dimensions in the
actual response. As an example, to query for certain data elements filtered by the periods and
organisation units you can use the following URL:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&filter=pe:2014Q1;2014Q2
&filter=ou:O6uvpzGd5pu;lc3eMKXaEfw
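Because the dimension and filter parameters are repeated, a client has to send several query parameters with the same name. One way to do this with Python's requests library is to pass a list of (key, value) tuples, as in the sketch below, which mirrors the example URL above; the base URL and credentials are placeholders.

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials

# Repeated dimension/filter parameters expressed as a list of tuples.
params = [
    ("dimension", "dx:fbfJHSPpUQD;cYeuwXTCPkU"),
    ("filter", "pe:2014Q1;2014Q2"),
    ("filter", "ou:O6uvpzGd5pu;lc3eMKXaEfw"),
]

resp = requests.get(f"{BASE}/33/analytics", params=params, auth=AUTH)
resp.raise_for_status()
print(resp.json()["rows"])
```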
The aggregationType query parameter lets you define which aggregation operator should be
used for the query. By default, the aggregation operator defined for data elements included in
the query will be used. If your query does not contain any data elements but does include data
element groups, the aggregation operator of the first data element in the first group will be used.
The order of groups and data elements is undefined. This query parameter allows you to
override the default and specify a specific aggregation operator. As an example, you can set the
aggregation operator to “count” with the following URL:
/api/33/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2014Q1&dimension=ou:O6uvpzGd5pu
&aggregationType=COUNT
The measureCriteria query parameter lets you filter out ranges of data records to return. You can
instruct the system to return only records where the aggregated data value is equal, greater than,
greater or equal, less than or less or equal to certain values. You can specify any number of
criteria on the following format, where criteria and value should be substituted with real values:
/api/33/analytics?measureCriteria=criteria:value;criteria:value
As an example, the following query will return only records where the data value is greater or
equal to 6500 and less than 33000:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&dimension=pe:2014
&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw&measureCriteria=GE:6500;LT:33000
Similarly, the preAggregationMeasureCriteria query parameter lets you filter out data values before the aggregation is performed:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&dimension=pe:2014
&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw&preAggregationMeasureCriteria=GE:10;LT:100
The startDate and endDate parameters can be used to specify a custom date range to aggregate
over. When specifying a date range you cannot specify relative or fixed periods as dimension or
filter. The date range will filter the analytics response. You can use it like this:
/api/33/[Link]?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=ou:ImspTQPwCqd&startDate=2018-01-01&endDate=2018-06-01
In order to have the analytics resource generate the data in the shape of a ready-made table, you
can provide the tableLayout parameter with true as value. Instead of generating a plain,
normalized data source, the analytics resource will now generate the data in a table layout. You
can use the columns and rows parameters, with dimension identifiers separated by semi-colons
as values, to indicate which dimensions to use as table columns and rows. The columns and rows
dimensions must be present as data dimensions in the query (not as filters). Such a request can
look like this:
/api/33/[Link]?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU&dimension=pe:2014Q1;2014Q2
&dimension=ou:O6uvpzGd5pu&tableLayout=true&columns=dx;ou&rows=pe
The order parameter can be used to instruct the analytics resource to generate ordered data. The
data will be ordered in ascending (or descending) order of values. An example request for
ordering the values in descending order is:
/api/33/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:LAST_12_MONTHS
&dimension=ou:O6uvpzGd5pu&order=DESC
1.43.2 Dimensions and items

DHIS2 features a multi-dimensional data model with several fixed and dynamic data dimensions.
The fixed dimensions are the data element, period (time) and organisation unit dimension. You
can dynamically add dimensions through categories, data element group sets and organisation
unit group sets. The table below displays the available data dimensions in DHIS2. Each data
dimension has a corresponding dimension identifier, and each dimension can have a set of
dimension items:
| Dimension | Dimension id | Dimension items |
|---|---|---|
| Data elements, indicators, data set reporting rate metrics, data element operands, program indicators, program data elements, program attributes, validation rules | dx | Data element, indicator, data set reporting rate metrics, data element operand, program indicator, program attribute identifiers, keyword DE_GROUP-<group-id>, IN_GROUP-<group-id>, use <dataelement-id>.<optioncombo-id> for data element operands, <program-id>.<dataelement-id> for program data elements, <program-id>.<attribute-id> for program attributes, <validationrule-id> for validation results. |
| Periods (time) | pe | ISO periods and relative periods, see "date and period format" |
| Organisation unit hierarchy | ou | Organisation unit identifiers, and keywords USER_ORGUNIT, USER_ORGUNIT_CHILDREN, USER_ORGUNIT_GRANDCHILDREN, LEVEL-<level> and OU_GROUP-<group-id> |
| Category option combinations | co | Category option combo identifiers (omit to get all items) |
| Attribute option combinations | ao | Category option combo identifiers (omit to get all items) |
| Categories | <category id> | Category option identifiers (omit to get all items) |
| Data element group sets | <group set id> | Data element group identifiers (omit to get all items) |
| Organisation unit group sets | <group set id> | Organisation unit group identifiers (omit to get all items) |
| Category option group sets | <group set id> | Category option group identifiers (omit to get all items) |
It is not necessary to be aware of which objects are used for the various dynamic dimensions
when designing analytics queries. You can get a complete list of dynamic dimensions by visiting
this URL in the Web API:
/api/33/dimensions
The base URL to the analytics resource is /api/analytics. To request specific dimensions and
dimension items you can use a query string on the following format, where dim-id and dim-
item should be substituted with real values:
/api/33/analytics?dimension=dim-id:dim-item;dim-item&dimension=dim-id:dim-item;dim-item
As illustrated above, the dimension identifier is followed by a colon while the dimension items
are separated by semi-colons. As an example, a query for two data elements, two periods and
two organisation units can be done with the following URL:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=pe:2016Q1;2016Q2&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw
To query for data broken down by category option combinations instead of data element totals
you can include the category dimension in the query string, for instance like this:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=co&dimension=pe:201601&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw
When selecting data elements you can also select all data elements in a group as items by using
the DE_GROUP- syntax:
/api/33/analytics?dimension=dx:DE_GROUP-h9cuJOkOwY2
&dimension=pe:201601&dimension=ou:O6uvpzGd5pu
When selecting data set reporting rates, the syntax contains a data set identifier followed by a
reporting rate metric:
/api/33/analytics?dimension=dx:BfMAe6Itzgt.REPORTING_RATE;BfMAe6Itzgt.ACTUAL_REPORTS
&dimension=pe:201601&dimension=ou:O6uvpzGd5pu
To query for program data elements (of tracker domain type) you can get those by specifying the
program for each data element using the <program-id>.<dataelement-id> syntax:
/api/33/[Link]?dimension=dx:eBAyeGv0exc.qrur9Dvnyt5;eBAyeGv0exc.GieVkTxp4HH
&dimension=pe:LAST_12_MONTHS&filter=ou:ImspTQPwCqd
To query for program attributes (tracked entity attributes) you can get those by specifying the
program for each attribute using the <program-id>.<attribute-id> syntax:
/api/33/[Link]?dimension=dx:IpHINAT79UW.a3kGcGDCuk6;IpHINAT79UW.UXz7xuGCEhU
&dimension=pe:LAST_4_QUARTERS&dimension=ou:ImspTQPwCqd
To query for organisation unit group sets and data elements you can use the following URL.
Notice how the group set identifier is used as a dimension identifier and the groups as dimension
items:
/api/33/analytics?dimension=Bpx0589u8y0:oRVt7g429ZO;MAs88nJc9nL
&dimension=pe:2016&dimension=ou:ImspTQPwCqd
To query for data elements and categories you can use this URL. Use the category identifier as a
dimension identifier and the category options as dimension items:
/api/33/analytics?dimension=dx:s46m5MS0hxu;fClA2Erf6IO&dimension=pe:2016
&dimension=YNZyaJHiHYq:btOyqprQ9e8;GEqzEKCHoGA&filter=ou:ImspTQPwCqd
To query using relative periods and organisation units associated with the current user you can
use a URL like this:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=pe:LAST_12_MONTHS&dimension=ou:USER_ORGUNIT
When selecting organisation units for a dimension you can select an entire level, optionally
constrained by any number of boundary organisation units, with the LEVEL-<level> syntax.
Boundary refers to a top node in a sub-hierarchy, meaning that all organisation units at the given
level below the given boundary organisation unit in the hierarchy will be included in the
response, and are returned as regular organisation unit dimension items. The level value can
either be a numerical level or refer to the identifier of the organisation unit level entity. A simple
query for all org units at level three:
/api/33/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2016&dimension=ou:LEVEL-3
A query for level three and four with two boundary org units can be specified like this:
/api/33/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2016
&dimension=ou:LEVEL-3;LEVEL-4;O6uvpzGd5pu;lc3eMKXaEf
When selecting organisation units you can also select all organisation units in an organisation
unit group to be included as dimension items using the OU_GROUP- syntax. The organisation
units in the groups can optionally be constrained by any number of boundary organisation units.
Both the level and the group items can be repeated any number of times:
/api/33/analytics?dimension=dx:fbfJHSPpUQD&dimension=pe:2016
&dimension=ou:OU_GROUP-w0gFTTmsUcF;OU_GROUP-EYbopBOJWsW;O6uvpzGd5pu;lc3eMKXaEf
You can utilize identifier schemes for the metadata part of the analytics response with the
outputIdScheme property. You can use ID, code and attributes as the identifier scheme:
/api/33/analytics?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&dimension=pe:2017Q1;2017Q2&dimension=ou:O6uvpzGd5pu&outputIdScheme=CODE
A few things to be aware of when using the analytics resource are listed below.
• Data elements, indicators, data set reporting rates, program data elements and program
indicators are part of a common data dimension, identified as "dx". This means that you
can use any data element, indicator and data set identifier together with the "dx"
dimension identifier in a query.
• For the category, data element group set and organisation unit group set dimensions, all
dimension items will be used in the query if no dimension items are specified.
• For the period dimension, the dimension items are ISO period identifiers and/or relative
periods. Please refer to the section above called “Date and period format” for the period
format and available relative periods.
• For the organisation unit dimension, you can specify the items to be the organisation unit
or sub-units of the organisation unit associated with the user currently authenticated for
the request using the keys USER_ORGUNIT or USER_ORGUNIT_CHILDREN as items,
respectively. You can also specify organisation unit identifiers directly, or a combination of
both.
• For the organisation unit dimension, you can specify the organisation hierarchy level and
the boundary unit to use for the request on the format LEVEL-<level>-<boundary-id>;
as an example LEVEL-3-ImspTQPwCqd implies all organisation units below the given
boundary unit at level 3 in the hierarchy.
• For the organisation unit dimension, the dimension items are the organisation units and
their sub-hierarchy - data will be aggregated for all organisation units below the given
organisation unit in the hierarchy.
• You cannot specify dimension items for the category option combination dimension.
Instead, the response will contain the items which are linked to the data values.
1.43.3 The dx dimension

The dx dimension is a special dimension which can contain several data types: data elements,
indicators, data set reporting rates, data element operands, program data elements, program
attributes, program indicators and validation rules.
Items from all of the various dx types can be combined in an analytics request. An example looks
like this:
/api/33/[Link]
?dimension=dx:Uvn6LCg7dVU;BfMAe6Itzgt.REPORTING_RATE;IpHINAT79UW.a3kGcGDCuk6
&dimension=pe:LAST_12_MONTHS&filter=ou:ImspTQPwCqd
The group syntax can be used together with any other item as well. An example looks like this:
/api/33/[Link]
?dimension=dx:DE_GROUP-qfxEYY9xAl6;IN_GROUP-oehv9EO3vP7;BfMAe6Itzgt.REPORTING_RATE
&dimension=pe:LAST_12_MONTHS&filter=ou:ImspTQPwCqd
Data element operands can optionally specify attribute option combinations and use wildcards
e.g. to specify all category option combination values:
/api/33/[Link]
?dimension=dx:Uvn6LCg7dVU.*.j8vBiBqGf6O;Uvn6LCg7dVU.Z4oQs46iTeR
&dimension=pe:LAST_12_MONTHS&filter=ou:ImspTQPwCqd
Tip
A great way to learn how to use the analytics API is to use the DHIS2
pivot table app. You can play around with pivot tables using the various
dimensions and items and click Download > Plain data source > JSON to
see the resulting analytics API calls in the address bar of your Web
browser.
1.43.4 Response formats

The analytics response containing aggregate data can be returned in various representation
formats. As usual, you can indicate interest in a specific format by appending a file extension to
the URL, through the Accept HTTP header or through the format query parameter. The default
format is JSON. The available formats and content-types are listed below.
• json (application/json)
• jsonp (application/javascript)
• xml (application/xml)
• csv (application/csv)
• html (text/html)
• html+css (text/html)
• xls (application/[Link]-excel)
As an example, to request an analytics response in XML format you can use the following URL:
/api/33/[Link]?dimension=dx:fbfJHSPpUQD
&dimension=pe:2016&dimension=ou:O6uvpzGd5pu;lc3eMKXaEfw
The analytics responses must be retrieved using the HTTP GET method. This allows for direct
linking to analytics responses from Web pages as well as other HTTP-enabled clients. To do
functional testing you can use the curl command-line tool. By executing this command against
the demo database you will get an analytics response in JSON format:

curl "[Link]/demo/api/[Link]?dimension=dx:eTDtyyaSA7f;FbKK4ofIv5R
  &dimension=pe:2016Q1;2016Q2&filter=ou:ImspTQPwCqd" -u admin:district
"headers":
[
{
180
1 Web API 1.43.4 Response formats
"name":
"dx",
"column":
"Data",
"meta":
true,
"type":
"[Link]"
},
{
"name":
"pe",
"column":
"Period",
"meta":
true,
"type":
"[Link]"
},
{
"name":
"value",
"column":
"Value",
"meta":
false,
"type":
"[Link]"
}
],
"height":
4,
"metaData":
{
"pe":
["2016Q1",
"2016Q2"],
"ou":
["ImspTQPwCqd"],
"names":
{
"2016Q1":
"Jan
to Mar
2016",
"2016Q2":
"Apr
to Jun
2016",
"FbKK4ofIv5R":
"Measles Coverage
<1 y",
"ImspTQPwCqd":
"Sierra
Leone",
"eTDtyyaSA7f": "Fully
Immunized Coverage"
}
},
"rows":
[
181
1 Web API 1.43.5 Constraints and validation
["eTDtyyaSA7f",
"2016Q2",
"81.1"],
["eTDtyyaSA7f",
"2016Q1",
"74.7"],
["FbKK4ofIv5R",
"2016Q2",
"88.9"],
["FbKK4ofIv5R",
"2016Q1",
"84.0"]
],
"width":
3
}
The response represents a table of dimensional data. The headers array gives an overview of
which columns are included in the table and what the columns contain. The column property
shows the column dimension identifier, or if the column contains measures, the word “Value”.
The meta property is true if the column contains dimension items or false if the column contains
a measure (aggregated data values). The name property is similar to the column property, except
it displays “value” in case the column contains a measure. The type property indicates the Java
class type of column values.
The height and width properties indicate how many data rows and columns are contained in the
response, respectively.
The metaData periods property contains a unique, ordered array of the periods included in the
response. The metaData ou property contains an array of the identifiers of organisation units
included in the response. The metaData names property contains a mapping between the
identifiers used in the data response and the names of the objects they represent. It can be used
by clients to substitute the identifiers within the data response with names in order to give a
more meaningful view of the data table.
The rows array contains the dimensional data table. It contains columns with dimension items
(object or period identifiers) and a column with aggregated data values. The example response
above has a data/indicator column, a period column and a value column. The first column
contains indicator identifiers, the second contains ISO period identifiers and the third contains
aggregated data values.
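Since the rows array is positional, clients typically combine it with the headers array to build keyed records and then use the metaData names for readability. A sketch of that pattern in Python (requests), reusing the query from the curl example above; the base URL and credentials are placeholders.

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials

params = [
    ("dimension", "dx:eTDtyyaSA7f;FbKK4ofIv5R"),
    ("dimension", "pe:2016Q1;2016Q2"),
    ("filter", "ou:ImspTQPwCqd"),
]
data = requests.get(f"{BASE}/analytics.json", params=params, auth=AUTH).json()

names = data["metaData"]["names"]
columns = [h["name"] for h in data["headers"]]

# Turn each positional row into a dict keyed by header name,
# replacing identifiers with readable names where available.
records = []
for row in data["rows"]:
    record = dict(zip(columns, row))
    record["dx"] = names.get(record["dx"], record["dx"])
    record["pe"] = names.get(record["pe"], record["pe"])
    records.append(record)

print(records[0])  # e.g. {'dx': 'Fully Immunized Coverage', 'pe': 'Apr to Jun 2016', 'value': '81.1'}
```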
1.43.5 Constraints and validation

There are several constraints on the input parameters you can provide to the analytics resource.
If any of the constraints are violated, the API will return a 409 Conflict response and a response
message looking similar to this:
"httpStatus":
"Conflict",
"httpStatusCode":
409,
"status":
"ERROR",
"message": "Only a single indicator can be
specified as filter",
182
1 Web API 1.43.6 Data value set format
"errorCode":
"E7108"
}
The httpStatus and httpStatusCode fields indicate the HTTP status and status code per the
HTTP specification. The message field provides a human-readable description of the validation
error. The errorCode field provides a machine-readable code which can be used by clients to
handle validation errors. The possible validation errors for the aggregate analytics API are
described in the table below.
| Error code | Message |
|---|---|
| E7100 | Query parameters cannot be null |
| E7101 | At least one dimension must be specified |
| E7102 | At least one data dimension item or data element group set dimension item must be specified |
| E7103 | Dimensions cannot be specified as dimension and filter simultaneously |
| E7104 | At least one period as dimension or filter, or start and end dates, must be specified |
| E7105 | Periods and start and end dates cannot be specified simultaneously |
| E7106 | Start date cannot be after end date |
| E7107 | Start and end dates cannot be specified for reporting rates |
| E7108 | Only a single indicator can be specified as filter |
| E7109 | Only a single reporting rate can be specified as filter |
| E7110 | Category option combos cannot be specified as filter |
| E7111 | Dimensions cannot be specified more than once |
| E7112 | Reporting rates can only be specified together with dimensions of type |
| E7113 | Assigned categories cannot be specified when data elements are not specified |
| E7114 | Assigned categories can only be specified together with data elements, not indicators or reporting rates |
| E7115 | Data elements must be of a value and aggregation type that allow aggregation |
| E7116 | Indicator expressions cannot contain cyclic references |
| E7117 | A data dimension 'dx' must be specified when output format is DATA_VALUE_SET |
| E7118 | A period dimension 'pe' must be specified when output format is DATA_VALUE_SET |
| E7119 | An organisation unit dimension 'ou' must be specified when output format is DATA_VALUE_SET |
1.43.6 Data value set format

The analytics dataValueSet resource allows for returning aggregated data in the data value set
format. This format represents raw data values, as opposed to data which has been aggregated
along various dimensions. Exporting aggregated data as regular data values is useful for data
exchange between systems when the target system contains data of finer granularity compared
to what the destination system is storing.

As an example, one can specify an indicator in the target system to summarize data for multiple
data elements and import this data for a single data element in the destination system. As
another example, one can aggregate data collected at organisation unit level 4 in the target
system to level 2 and import that data in the destination system.
You can retrieve data in the raw data value set format from the dataValueSet resource:
/api/33/analytics/dataValueSet
• json (application/json)
• xml (application/xml)
When using the data value set format, exactly three dimensions must be specified as analytics
dimensions with at least one dimension item each:

• Data (dx)
• Period (pe)
• Organisation unit (ou)
Any other dimension will be ignored. Filters will be applied as with regular analytics requests.
Note that any data dimension type can be specified, including indicators, data elements, data
element operands, data sets and program indicators.
An example request which aggregates data for specific indicators, periods and organisation units
and returns it as regular data values in XML looks like this:
api/analytics/[Link]?dimension=dx:Uvn6LCg7dVU;OdiHJayrsKo
&dimension=pe:LAST_4_QUARTERS&dimension=ou:lc3eMKXaEfw;PMa2VCrupOd
A request which aggregates data for data element operands and uses CODE as output identifier
scheme looks like the below. When defining the output identifier scheme, all metadata objects
part of the response are affected:
api/analytics/[Link]?dimension=dx:fbfJHSPpUQD.pq2XI5kz2BY;fbfJHSPpUQD.PT59n8BQbqM
&dimension=pe:LAST_12_MONTHS&dimension=ou:ImspTQPwCqd&outputIdScheme=CODE
When using attribute-based identifier schemes for export there is a risk of producing duplicate
data values. The boolean query parameter duplicatesOnly can be used for debugging purposes
to return only duplicate data values. This response can be used to clean up the duplicates:
api/analytics/[Link]?dimension=dx:Uvn6LCg7dVU;OdiHJayrsKo
&dimension=pe:LAST_4_QUARTERS&dimension=ou:lc3eMKXaEfw&duplicatesOnly=true
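A sketch of such a data exchange in Python (requests): aggregate data is pulled from one instance in the data value set format and pushed to another instance's regular data value set import endpoint (/api/dataValueSets). Both instance URLs and the credentials are placeholders, and import parameters (dry run, identifier schemes, and so on) would need to be adjusted for a real exchange.

```python
import requests

SOURCE = "https://source.example.org/api"  # placeholder source instance
TARGET = "https://target.example.org/api"  # placeholder target instance
AUTH = ("admin", "district")               # placeholder credentials

params = [
    ("dimension", "dx:Uvn6LCg7dVU;OdiHJayrsKo"),
    ("dimension", "pe:LAST_4_QUARTERS"),
    ("dimension", "ou:lc3eMKXaEfw;PMa2VCrupOd"),
]

# Pull aggregated data as a data value set (JSON).
dvs = requests.get(f"{SOURCE}/analytics/dataValueSet.json", params=params, auth=AUTH).json()

# Push the data value set to the other instance.
resp = requests.post(f"{TARGET}/dataValueSets", json=dvs, auth=AUTH)
print(resp.json().get("status"))
```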
1.43.7 Raw data format

The analytics rawData resource allows for returning the data stored in the analytics data tables
without any aggregation being performed. This is useful for clients which would like to perform
aggregation and filtering on their own without having to denormalize data in the available data
dimensions themselves.
/api/analytics/rawData
• json (application/json)
• csv (application/csv)
This resource follows the syntax of the regular analytics resource. Only a subset of the query
parameters are supported. Additionally, a startDate and endDate parameter are available.
The dimension query parameter defines which dimensions (table columns) should be included in
the response. It can optionally be constrained with items. The filter query parameter defines
which items and dimensions (table columns) should be used as a filter for the response.
For the organisation unit dimension, the response will contain data associated with the
organisation unit and all organisation units in the sub-hierarchy (children in the tree). This is
different compared to the regular analytics resource, where only the explicitly selected
organisation units are included.
To retrieve a response with specific data elements, specific periods, specific organisation units
and all data for two custom dimensions you can issue a request like this:
/api/analytics/[Link]?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU;Jtf34kNZhzP
&dimension=J5jldMd8OHv&dimension=Bpx0589u8y0
&dimension=pe:LAST_12_MONTHS
&dimension=ou:O6uvpzGd5pu;fdc6uOvgoji
The startDate and endDate parameters allow for fetching data linked to any period between
those dates. This avoids the need for defining all periods explicitly in the request:
/api/analytics/[Link]?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU;Jtf34kNZhzP
&dimension=J5jldMd8OHv&dimension=Bpx0589u8y0
&startDate=2015-01-01&endDate=2015-12-31
&dimension=ou:O6uvpzGd5pu;fdc6uOvgoji
The filter parameter can be used to filter a response without including that dimension as part of
the response, this time in CSV format:
/api/analytics/[Link]?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU;Jtf34kNZhzP
&filter=J5jldMd8OHv:uYxK4wmcPqA;tDZVQ1WtwpA
&startDate=2015-01-01&endDate=2015-12-31
&dimension=ou:O6uvpzGd5pu
The outputIdScheme parameter is useful if you want human readable data responses as it can be
set to NAME like this:
/api/analytics/[Link]?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&filter=J5jldMd8OHv:uYxK4wmcPqA;tDZVQ1WtwpA
&startDate=2017-01-01&endDate=2017-12-31
&dimension=ou:O6uvpzGd5pu
&outputIdScheme=NAME
The response from the rawData resource will look identical to the regular analytics resource; the
difference is that the response contains raw, non-aggregated data, suitable for further
aggregation by third-party systems.
1.43.8 Debugging
When debugging analytics requests it can be useful to examine the data value source of the
aggregated analytics response. The analytics/debug/sql resource will provide an SQL statement
that returns the relevant content of the datavalue table. You can produce this SQL by doing a GET
request with content type “text/html” or “text/plain” like below. The dimension and filter syntax
are identical to regular analytics queries:
/api/analytics/debug/sql?dimension=dx:fbfJHSPpUQD;cYeuwXTCPkU
&filter=pe:2016Q1;2016Q2&filter=ou:O6uvpzGd5pu
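A quick way to fetch the generated SQL from a script is to request it with an Accept header of text/plain, as in the sketch below (Python with requests; instance URL and credentials are placeholders).

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials

params = [
    ("dimension", "dx:fbfJHSPpUQD;cYeuwXTCPkU"),
    ("filter", "pe:2016Q1;2016Q2"),
    ("filter", "ou:O6uvpzGd5pu"),
]

# Ask for the generated SQL statement as plain text.
sql = requests.get(f"{BASE}/analytics/debug/sql", params=params,
                   headers={"Accept": "text/plain"}, auth=AUTH).text
print(sql)
```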
1.44 Event analytics

The event analytics API lets you access aggregated event data and query events captured in
DHIS2. This resource lets you retrieve events based on a program and optionally a program
stage, and lets you retrieve and filter events on any event dimensions.
/api/33/analytics/events
Event dimensions include data elements, attributes, organisation units and periods. The
aggregated event analytics resource will return aggregated information such as counts or
averages. The query analytics resource will simply return events matching a set of criteria and
does not perform any aggregation. You can specify dimension items in the form of options from
option sets and legends from legend sets for data elements and attributes which are associated
with such. The event dimensions are listed in the table below.
Event dimensions

| Dimension | Dimension id | Description |
|---|---|---|
| Data elements | <id> | Data element identifiers |
| Attributes | <id> | Attribute identifiers |
| Periods | pe | ISO periods and relative periods, see "date and period format" |
| Organisation units | ou | Organisation unit identifiers and keywords USER_ORGUNIT, USER_ORGUNIT_CHILDREN, USER_ORGUNIT_GRANDCHILDREN, LEVEL-<level> and OU_GROUP-<group-id> |
| Organisation unit group sets | <org unit group set id> | Organisation unit group set identifiers |
| Categories | <category id> | Category identifiers (program attribute categories only) |
1.44.2 Request query parameters

The analytics event API lets you specify a range of query parameters.
| Query parameter | Required | Description | Options (default first) |
|---|---|---|---|
| program | Yes | Program identifier. | Any program identifier |
| stage | No | Program stage identifier. | Any program stage identifier |
| startDate | Yes | Start date for events. | Date in yyyy-MM-dd format |
| endDate | Yes | End date for events. | Date in yyyy-MM-dd format |
| dimension | Yes | Dimension identifier including data elements, attributes, program indicators, periods, organisation units and organisation unit group sets. Parameter can be repeated any number of times. Item filters can be applied to a dimension on the format <item-id>:<operator>:<filter>. Filter values are case-insensitive. | Operators can be EQ \| GT \| GE \| LT \| LE \| NE \| LIKE \| IN |
| filter | No | Dimension identifier including data elements, attributes, periods, organisation units and organisation unit group sets. Parameter can be repeated any number of times. Item filters can be applied to a dimension on the format <item-id>:<operator>:<filter>. Filter values are case-insensitive. | |
| hierarchyMeta | No | Include names of organisation unit ancestors and hierarchy paths of organisation units in the metadata. | false \| true |
| eventStatus | No | Specify status of events to include. | ACTIVE \| COMPLETED \| SCHEDULE \| OVERDUE \| SKIPPED |
| programStatus | No | Specify enrollment status of events to include. | ACTIVE \| COMPLETED \| CANCELLED |
| relativePeriodDate | No | Date identifier, e.g. "2016-01-01". Overrides the start date of the relative period. | string |
| columns | No | Dimensions to use as columns for table layout. | Any dimension (must be query dimension) |
| rows | No | Dimensions to use as rows for table layout. | Any dimension (must be query dimension) |
| ouMode | No | The mode of selecting organisation units. Default is DESCENDANTS, meaning all sub units in the hierarchy. CHILDREN refers to immediate children in the hierarchy; SELECTED refers to the selected organisation units only. | DESCENDANTS, CHILDREN, SELECTED |
| asc | No | Dimensions to be sorted ascending, can reference event date, org unit name and code and any item identifiers. | EVENTDATE \| OUNAME \| OUCODE \| item identifier |
| desc | No | Dimensions to be sorted descending, can reference event date, org unit name and code and any item identifiers. | EVENTDATE \| OUNAME \| OUCODE \| item identifier |
| coordinatesOnly | No | Whether to only return events which have coordinates. | false \| true |
| dataIdScheme | No | Id scheme to be used for data, more specifically data elements and attributes which have an option set or legend set, e.g. return the name of the option instead of the code, or the name of the legend instead of the legend ID, in the data response. | NAME \| CODE \| UID |
| page | No | The page number. Default page is 1. | Numeric positive value |
| pageSize | No | The page size. Default size is 50 items per page. | Numeric zero or positive value |
1.44.3 Event query analytics
The analytics/events/query resource lets you query for captured events. This resource does not
perform any aggregation, rather it lets you query and filter for information about events.
/api/33/analytics/events/query
You can specify any number of dimensions and any number of filters in a query. Dimension item
identifiers can refer to any of data elements, person attributes, person identifiers, fixed and
relative periods and organisation units. Dimensions can optionally have a query operator and a
filter. Event queries should be on the format described below.
/api/33/analytics/events/query/<program-id>?startDate=yyyy-MM-dd&endDate=yyyy-MM-dd
&dimension=ou:<ou-id>;<ou-id>&dimension=<item-id>&dimension=<item-id>:<operator>:<filter>
For example, to retrieve events from the “Inpatient morbidity and mortality” program between
January and October 2016, where the “Gender” and “Age” data elements are included and the
“Age” dimension is filtered on “18”, you can use the following query:
/api/33/analytics/events/query/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:O6uvpzGd5pu;fdc6uOvgoji&dimension=oZg33kd9taw&dimension=qrur9Dvnyt5:EQ:18
To retrieve events for the "Birth" program stage of the "Child programme" program between
March and December 2016, where the "Weight" data element is included and filtered for values
larger than 2000, you can use the following query:
/api/33/analytics/events/query/IpHINAT79UW?stage=A03MvHHogjR&startDate=2016-03-01
&endDate=2016-12-31&dimension=ou:O6uvpzGd5pu&dimension=UXz7xuGCEhU:GT:2000
Sorting can be applied to the query for the event date of the event and any dimensions. To sort
descending on the event date and ascending on the “Age” data element dimension you can use:
/api/33/analytics/events/query/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:O6uvpzGd5pu&dimension=qrur9Dvnyt5&desc=EVENTDATE&asc=qrur9Dvnyt5
Paging can be applied to the query by specifying the page number and the page size parameters.
If page number is specified but page size is not, a page size of 50 will be used. If page size is
specified but page number is not, a page number of 1 will be used. To get the third page of the
response with a page size of 20 you can use a query like this:
/api/33/analytics/events/query/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:O6uvpzGd5pu&dimension=qrur9Dvnyt5&page=3&pageSize=20
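The page and pageSize parameters make it straightforward to walk through a large result set. A minimal paging loop in Python (requests) is sketched below; the instance URL and credentials are placeholders, and stopping when a page returns fewer rows than the page size is an assumption about the result shape rather than an API guarantee.

```python
import requests

BASE = "https://play.dhis2.org/demo/api"  # placeholder instance URL
AUTH = ("admin", "district")              # placeholder credentials

page, page_size, all_rows = 1, 20, []
while True:
    params = [
        ("startDate", "2016-01-01"),
        ("endDate", "2016-10-31"),
        ("dimension", "ou:O6uvpzGd5pu"),
        ("dimension", "qrur9Dvnyt5"),
        ("page", str(page)),
        ("pageSize", str(page_size)),
    ]
    data = requests.get(f"{BASE}/33/analytics/events/query/eBAyeGv0exc",
                        params=params, auth=AUTH).json()
    rows = data.get("rows", [])
    all_rows.extend(rows)
    if len(rows) < page_size:  # assumed stop condition: short page means last page
        break
    page += 1

print(f"fetched {len(all_rows)} events")
```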
[Link] Filtering
Filters can be applied to data elements, person attributes and person identifiers. The filtering is
done through the query parameter value on the following format:
&dimension=<item-id>:<operator>:<filter-value>
As an example, you can filter the “Weight” data element for values greater than 2000 and lower
than 4000 like this:
&dimension=UXz7xuGCEhU:GT:2000&dimension=UXz7xuGCEhU:LT:4000
You can filter the “Age” data element for multiple, specific ages using the IN operator like this:
&dimension=qrur9Dvnyt5:IN:18;19;20
You can specify multiple filters for a given item by repeating the operator and filter components,
all separated with semi-colons:
&dimension=qrur9Dvnyt5:GT:5:LT:15
Filter operators

| Operator | Description |
|---|---|
| EQ | Equal to |
| GT | Greater than |
| GE | Greater than or equal to |
| LT | Less than |
| LE | Less than or equal to |
| NE | Not equal to |
| LIKE | Like (free text match) |
| IN | Equal to one of multiple values separated by ";" |
The default response representation format is JSON. Requests must use the HTTP GET
method. The following response formats are supported.
• json (application/json)
• jsonp (application/javascript)
• xls (application/[Link]-excel)
As an example, to get a response in Excel format you can use a file extension in the request URL
like this:
/api/33/analytics/events/query/[Link]?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:O6uvpzGd5pu&dimension=oZg33kd9taw&dimension=qrur9Dvnyt5
You can set the hierarchyMeta query parameter to true in order to include names of all ancestor
organisation units in the meta-section of the response:
/api/33/analytics/events/query/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:YuQRtpLP10I&dimension=qrur9Dvnyt5:EQ:50&hierarchyMeta=true
The response will look similar to this:
"headers":
[
{
"name":
"psi",
"column":
"Event",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"ps",
"column":
"Program
stage",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"eventdate",
"column":
"Event
date",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"coordinates",
"column":
"Coordinates",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
194
1 Web API 1.44.3 Event query analytics
"name":
"ouname",
"column":
"Organisation
unit name",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"oucode",
"column":
"Organisation
unit code",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"ou",
"column":
"Organisation
unit",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"oZg33kd9taw",
"column":
"Gender",
"type":
"[Link]",
"hidden":
false,
"meta":
false
},
{
"name":
"qrur9Dvnyt5",
"column":
"Age",
"type":
"[Link]",
"hidden":
false,
195
1 Web API 1.44.3 Event query analytics
"meta":
false
}
],
"metaData":
{
"names":
{
"qrur9Dvnyt5":
"Age",
"eBAyeGv0exc": "Inpatient
morbidity and mortality",
"ImspTQPwCqd":
"Sierra
Leone",
"O6uvpzGd5pu":
"Bo",
"YuQRtpLP10I":
"Badjia",
"oZg33kd9taw":
"Gender"
},
"ouHierarchy":
{
"YuQRtpLP10I": "/
ImspTQPwCqd/
O6uvpzGd5pu"
}
},
"width":
8,
"height":
4,
"rows":
[
[
"yx9IDINf82o",
"Zj7UnCAulEk",
"2016-08-05",
"[5.12,
1.23]",
"Ngelehun",
"OU_559",
"YuQRtpLP10I",
"Female",
"50"
],
[
"IPNa7AsCyFt",
"Zj7UnCAulEk",
"2016-06-12",
196
1 Web API 1.44.3 Event query analytics
"[5.22,
1.43]",
"Ngelehun",
"OU_559",
"YuQRtpLP10I",
"Female",
"50"
],
[
"ZY9JL9dkhD2",
"Zj7UnCAulEk",
"2016-06-15",
"[5.42,
1.33]",
"Ngelehun",
"OU_559",
"YuQRtpLP10I",
"Female",
"50"
],
[
"MYvh4WAUdWt",
"Zj7UnCAulEk",
"2016-06-16",
"[5.32,
1.53]",
"Ngelehun",
"OU_559",
"YuQRtpLP10I",
"Female",
"50"
]
]
}
The headers section of the response describes the content of the query result. The event unique
identifier, the program stage identifier, the event date, the organisation unit name, the
organisation unit code and the organisation unit identifier appear as the first six dimensions in
the response and will always be present. Next come the data elements, person attributes and
person identifiers which were specified as dimensions in the request, in this case, the “Gender”
and “Age” data element dimensions. The header section contains the identifier of the dimension
item in the “name” property and a readable dimension description in the “column” property.
The metaData section, ou object contains the identifiers of all organisation units present in the
response mapped to a string representing the hierarchy. This hierarchy string lists the identifiers
of the ancestors (parents) of the organisation unit starting from the root. The names object
contains the identifiers of all items in the response mapped to their names.
The rows section contains the events produced by the query. Each row represents exactly one
event.
In order to have the event analytics resource generate the data in the shape of a ready-made
table, you can provide rows and columns parameters with requested dimension identifiers
separated by semi-colons as values to indicate which ones to use as table columns and rows.
Instead of generating a plain, normalized data source, the event analytics resource will now
generate the data in a table layout. The columns and rows dimensions must be present as data
dimensions in the query (not as filters). Such a request can look like this:
/api/33/[Link]+css?dimension=dx:cYeuwXTCPkU;fbfJHSPpUQD&dimension=pe:WEEKS_THIS_YEAR
&filter=ou:ImspTQPwCqd&displayProperty=SHORTNAME&columns=dx&rows=pe
1.44.4 Event aggregate analytics

/api/33/analytics/events/aggregate
The events aggregate resource does not return the event information itself, rather the aggregate
numbers of events matching the request query. Event dimensions include data elements, person
attributes, person identifiers, periods and organisation units. Aggregate event queries should be
on the format described below.
/api/33/analytics/events/aggregate/<program-id>?startDate=yyyy-MM-dd&endDate=yyyy-MM-dd
&dimension=ou:<ou-id>;<ou-id>&dimension=<item-id>&dimension=<item-id>:<operator>:<filter>
For example, to retrieve aggregate numbers for events from the “Inpatient morbidity and
mortality” program between January and October 2016, where the “Gender” and “Age” data
elements are included, the “Age” dimension item is filtered on “18” and the “Gender” item is
filtered on “Female”, you can use the following query:
/api/33/analytics/events/aggregate/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:O6uvpzGd5pu&dimension=oZg33kd9taw:EQ:Female&dimension=qrur9Dvnyt5:GT:50
To retrieve data for fixed and relative periods instead of start and end date, in this case, May
2016 and last 12 months, and the organisation unit associated with the current user, you can use
the following query:
/api/33/analytics/events/aggregate/eBAyeGv0exc?dimension=pe:201605;LAST_12_MONTHS
&dimension=ou:USER_ORGUNIT;fdc6uOvgo7ji&dimension=oZg33kd9taw
In order to specify “Female” as a filter for “Gender” for the data response, meaning “Gender” will
not be part of the response but will filter the aggregate numbers in it, you can use the following
syntax:
/api/33/analytics/events/aggregate/eBAyeGv0exc?dimension=pe:2016;
&dimension=ou:O6uvpzGd5pu&filter=oZg33kd9taw:EQ:Female
To specify the “Bo” organisation unit and the period “2016” as filters, and the “Mode of discharge” and “Gender” as dimensions, where “Gender” is filtered on the “Male” item, you can use a query like this:
/api/33/analytics/events/aggregate/eBAyeGv0exc?filter=pe:2016&filter=ou:O6uvpzGd5pu
&dimension=fWIAEtYVEGk&dimension=oZg33kd9taw:EQ:Male
To create a “top 3 report” for “Mode of discharge” you can use the limit and sortOrder query
parameters similar to this:
/api/33/analytics/events/aggregate/eBAyeGv0exc?filter=pe:2016&filter=ou:O6uvpzGd5pu
&dimension=fWIAEtYVEGk&limit=3&sortOrder=DESC
To specify a value dimension with a corresponding aggregation type you can use the value and
aggregationType query parameters. Specifying a value dimension will make the analytics engine
return aggregate values for the values of that dimension in the response as opposed to counts of
events.
/api/33/analytics/events/aggregate/[Link]?stage=Zj7UnCAulEk&dimension=ou:ImspTQPwCqd
&dimension=pe:LAST_12_MONTHS&dimension=fWIAEtYVEGk&value=qrur9Dvnyt5&aggregationType=AVERAGE
To base event analytics aggregation on a specific data element or attribute of value type date or
date time you can use the timeField parameter:
/api/33/analytics/events/aggregate/[Link]?dimension=ou:ImspTQPwCqd
&dimension=pe:LAST_12_MONTHS&dimension=cejWyOfXge6&stage=A03MvHHogjR&timeField=ENROLLMENT_DATE
To base event analytics aggregation on a specific data element or attribute of value type
organisation unit you can use the orgUnitField parameter:
/api/33/analytics/events/aggregate/[Link]?dimension=ou:ImspTQPwCqd
&dimension=pe:THIS_YEAR&dimension=oZg33kd9taw&stage=Zj7UnCAulEk&orgUnitField=S33cRBsnXPo
For aggregate queries, you can specify a range / legend set for numeric data element and
attribute dimensions. The purpose is to group the numeric values into ranges. As an example,
instead of generating data for an “Age” data element for distinct years, you can group the
information into age groups. To achieve this, the data element or attribute must be associated
with the legend set. The format is described below:
?dimension=<item-id>-<legend-set-id>
/api/33/analytics/events/aggregate/[Link]?stage=Zj7UnCAulEk
&dimension=qrur9Dvnyt5-Yf6UHoPkdS6&dimension=ou:ImspTQPwCqd&dimension=pe:LAST_MONTH
The default response representation format is JSON. The requests must use the HTTP GET method. The response will look similar to this:
"headers":
[
{
"name":
"oZg33kd9taw",
"column":
"Gender",
"type":
"[Link]",
"meta":
false
},
{
"name":
"qrur9Dvnyt5",
"column":
"Age",
"type":
"[Link]",
"meta":
false
},
{
"name":
"pe",
"column":
"Period",
"type":
"[Link]",
"meta":
false
},
{
"name":
"ou",
"column":
"Organisation
unit",
"type":
"[Link]",
"meta":
false
},
{
"name":
"value",
200
1 Web API 1.44.4 Event aggregate analytics
"column":
"Value",
"type":
"[Link]",
"meta":
false
}
],
"metaData":
{
"names":
{
"eBAyeGv0exc": "Inpatient
morbidity and mortality"
}
},
"width":
5,
"height":
39,
"rows":
[
["Female",
"95",
"201605",
"O6uvpzGd5pu",
"2"],
["Female",
"63",
"201605",
"O6uvpzGd5pu",
"2"],
["Female",
"67",
"201605",
"O6uvpzGd5pu",
"1"],
["Female",
"71",
"201605",
"O6uvpzGd5pu",
"1"],
["Female",
"75",
"201605",
"O6uvpzGd5pu",
"14"],
["Female",
"73",
"201605",
"O6uvpzGd5pu",
"5"]
]
}
Note that the max limit for rows to return in a single response is 10 000. If the query produces
more than the max limit, a 409 Conflict status code will be returned.
1.44.5 Event clustering analytics
The analytics/events/cluster resource provides clustered geospatial event data. A request looks
like this:
/api/33/analytics/events/cluster/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31
&dimension=ou:LEVEL-2&clusterSize=100000
&bbox=-13.2682125,7.3721619,-10.4261178,9.904012&includeClusterPoints=false
The cluster response provides the count of underlying points, the center point and extent of each
cluster. If the includeClusterPoints query parameter is set to true, a comma-separated
string with the identifiers of the underlying events is included. A sample response looks like this:
"headers":
[
{
"name":
"count",
"column":
"Count",
"type":
"[Link]",
"meta":
false
},
{
"name":
"center",
"column":
"Center",
"type":
"[Link]",
"meta":
false
},
{
"name":
"extent",
"column":
"Extent",
"type":
"[Link]",
"meta":
false
},
{
"name":
"points",
"column":
"Points",
"type":
"[Link]",
"meta":
false
}
202
1 Web API 1.44.6 Event count and extent analytics
],
"width":
3,
"height":
4,
"rows":
[
[
"3",
"POINT(-13.15818
8.47567)",
"BOX(-13.26821 8.4St7215,-13.08711
8.47807)",
""
],
[
"9",
"POINT(-13.11184
8.66424)",
"BOX(-13.24982 8.51961,-13.05816
8.87696)",
""
],
[
"1",
"POINT(-12.46144
7.50597)",
"BOX(-12.46144 7.50597,-12.46144
7.50597)",
""
],
[
"7",
"POINT(-12.47964
8.21533)",
"BOX(-12.91769 7.66775,-12.21011
8.49713)",
""
]
]
}
1.44.6 Event count and extent analytics

The analytics/events/count resource is suitable for geometry-related requests for retrieving the count and extent (bounding box) of events for a specific query. The query syntax is the same as for the events/query resource. A request looks like this:
/api/33/analytics/events/count/eBAyeGv0exc?startDate=2016-01-01
&endDate=2016-10-31&dimension=ou:O6uvpzGd5pu
The response will provide the count and extent in JSON format:
{
  "extent": "BOX(-13.2682125910096 7.38679562779441,-10.4261178860988 9.90401290212795)",
  "count": 59
}
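A quick way to issue this count query from the command line is sketched below; the base URL and the admin:district credentials are assumptions for a local demo instance.

# Retrieve count and extent for the query above (placeholders for server and credentials)
curl -u admin:district "https://localhost:8080/api/33/analytics/events/count/eBAyeGv0exc?startDate=2016-01-01&endDate=2016-10-31&dimension=ou:O6uvpzGd5pu"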
1.44.7 Constraints and validation

There are several constraints on the input parameters you can provide to the event analytics resource. If any of the constraints are violated, the API will return a 409 Conflict status code and a response message similar to this:

{
  "httpStatus": "Conflict",
  "httpStatusCode": 409,
  "status": "ERROR",
  "message": "At least one organisation unit must be specified",
  "errorCode": "E7200"
}
The possible validation errors for the event analytics API are described in the table below.
1.45 Enrollment analytics
The enrollment analytics API lets you access aggregated event data and query enrollments with their event data captured in DHIS2. This resource lets you retrieve data for a program based on program stages and data elements, in addition to tracked entity attributes. When querying event data for specific program stages within each enrollment, the data element values for each program stage will be returned as one row in the response from the API. If you query a data element in a program stage that is repeatable, the newest data element value will be used for that data element in the API response.
Enrollment dimensions include data elements, attributes, organisation units and periods. The
query analytics resource will simply return enrollments matching a set of criteria and does not
perform any aggregation.
Enrollment dimensions

Dimension: Data elements in program stages
Dimension id: <program stage id>.<data element id>
Description: Data element identifiers must include the program stage when querying data for enrollments, e.g. dimension=[Link]
The analytics/enrollments/query resource lets you query for captured enrollments. This resource
does not perform any aggregation, rather it lets you query and filter for information about
enrollments.
/api/33/analytics/enrollments/query
You can specify any number of dimensions and any number of filters in a query. Dimension item
identifiers can refer to any of the data elements in program stages, tracked entity attributes,
fixed and relative periods and organisation units. Dimensions can optionally have a query
operator and a filter. Enrollment queries should be in the format described below.
/api/33/analytics/enrollments/query/<program-id>?startDate=yyyy-MM-dd&endDate=yyyy-MM-dd
&dimension=ou:<ou-id>;<ou-id>&dimension=<item-id>&dimension=<item-id>:<operator>:<filter>
For example, to retrieve enrollments from the “Antenatal care” program for January 2019, where “First name” is picked up from attributes, the “Chronic conditions” and “Smoking” data elements are included from the first program stage, “Hemoglobin value” is included from the following program stage, and only women that have “Chronic conditions” are included, you can use the following query:
/api/33/analytics/enrollments/query/[Link]?dimension=ou:ImspTQPwCqd
&dimension=w75KJ2mc4zz&dimension=WZbXY0S00lP.de0FEHSIoxh:eq:1&dimension=w75KJ2mc4zz
&dimension=[Link]&dimension=[Link]
&startDate=2019-01-01&endDate=2019-01-31
To retrieve enrollments from the “Antenatal care” program for last month (relative to the point in time the query is executed), where the “Chronic conditions” and “Smoking” data elements are included from the first program stage, and “Hemoglobin value” from the followup program stage, only including smoking women with hemoglobin less than 20:
/api/33/analytics/enrollments/query/[Link]?dimension=ou:ImspTQPwCqd
&dimension=WZbXY0S00lP.de0FEHSIoxh&dimension=w75KJ2mc4zz
&dimension=[Link]:eq:1&dimension=[Link]:lt:20
&dimension=pe:LAST_MONTH
Sorting can be applied to the query for the enrollment and incident dates of the enrollment:
/api/33/analytics/enrollments/query/[Link]?dimension=ou:ImspTQPwCqd
&columns=w75KJ2mc4zz&dimension=[Link]&dimension=pe:LAST_MONTH
&stage=WZbXY0S00lP&pageSize=10&page=1&asc=ENROLLMENTDATE&ouMode=DESCENDANTS
Paging can be applied to the query by specifying the page number and the page size parameters.
If page number is specified but page size is not, a page size of 50 will be used. If page size is
specified but page number is not, a page number of 1 will be used. To get the second page of the
response with a page size of 10 you can use a query like this:
/api/33/analytics/enrollments/query/[Link]?dimension=ou:ImspTQPwCqd
&dimension=WZbXY0S00lP.de0FEHSIoxh&dimension=w75KJ2mc4zz&dimension=pe:LAST_MONTH
&dimension=[Link]&pageSize=10&page=2
[Link] Filtering
Filters can be applied to data elements, person attributes and person identifiers. The filtering is done through the query parameter value in the following format:
&dimension=<item-id>:<operator>:<filter-value>
As an example, you can filter the “Weight” data element for values greater than 2000 and lower
than 4000 like this:
&dimension=WZbXY0S00lP.UXz7xuGCEhU:GT:2000&dimension=WZbXY0S00lP.UXz7xuGCEhU:LT:4000
You can filter the “Age” attribute for multiple, specific ages using the IN operator like this:
&dimension=qrur9Dvnyt5:IN:18;19;20
You can specify multiple filters for a given item by repeating the operator and filter components,
all separated with semi-colons:
&dimension=qrur9Dvnyt5:GT:5:LT:15
Filter operators
Operator Description
EQ Equal to
GT Greater than
GE Greater than or equal to
LT Less than
LE Less than or equal to
NE Not equal to
LIKE Like (free text match)
IN Equal to one of multiple values separated by “;”
1.45.3 Request query parameters

The analytics enrollment query API lets you specify a range of query parameters.
program (required): Program identifier. Options: any program identifier.
startDate (optional): Start date for enrollments. Date in yyyy-MM-dd format.
endDate (optional): End date for enrollments. Date in yyyy-MM-dd format.
dimension (required): Dimension identifier including data elements, attributes, program indicators, periods, organisation units and organisation unit group sets. Can be repeated any number of times. Item filters can be applied to a dimension in the format <item-id>:<operator>:<filter>. Filter values are case-insensitive. Operators: EQ | GT | GE | LT | LE | NE | LIKE | IN.
filter (optional): Dimension identifier including data elements, attributes, periods, organisation units and organisation unit group sets. Can be repeated any number of times. Item filters can be applied to a dimension in the format <item-id>:<operator>:<filter>. Filter values are case-insensitive.
programStatus (optional): Enrollment status of enrollments to include. Options: ACTIVE | COMPLETED | CANCELLED.
relativePeriodDate (optional): Date identifier, e.g. “2016-01-01”. Overrides the start date of the relative period.
ouMode (optional): The mode of selecting organisation units. Options: DESCENDANTS (default, all sub-units in the hierarchy), CHILDREN (immediate children in the hierarchy), SELECTED (the selected organisation units only).
asc (optional): Dimensions to be sorted ascending; can reference enrollment date, incident date, org unit name and code. Options: ENROLLMENTDATE | INCIDENTDATE | OUNAME | OUCODE.
desc (optional): Dimensions to be sorted descending; can reference enrollment date, incident date, org unit name and code. Options: ENROLLMENTDATE | INCIDENTDATE | OUNAME | OUCODE.
hierarchyMeta (optional): Include names of organisation unit ancestors and hierarchy paths of organisation units in the metadata. Options: false | true.
coordinatesOnly (optional): Whether to only return enrollments which have coordinates. Options: false | true.
page (optional): The page number. Default is 1. Numeric positive value.
pageSize (optional): The page size. Default is 50 items per page. Numeric zero or positive value.
The default response representation format is JSON. The requests must use the HTTP GET method. The following response formats are supported.
• json (application/json)
• xml (application/xml)
• xls (application/[Link]-excel)
• csv (application/csv)
• html (text/html)
• html+css (text/html)
As an example, to get a response in Excel format you can use a file extension in the request URL
like this:
/api/33/analytics/enrollments/query/[Link]?dimension=ou:ImspTQPwCqd
&dimension=WZbXY0S00lP.de0FEHSIoxh&columns=w75KJ2mc4zz
&dimension=[Link]&dimension=pe:LAST_MONTH&stage=WZbXY0S00lP
&pageSize=10&page=1&asc=ENROLLMENTDATE&ouMode=DESCENDANTS
"headers":
[
{
"name":
"pi",
"column":
"Enrollment",
"valueType":
"TEXT",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"tei",
209
1 Web API 1.45.3 Request query parameters
"column":
"Tracked
entity
instance",
"valueType":
"TEXT",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"enrollmentdate",
"column":
"Enrollment
date",
"valueType":
"DATE",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"incidentdate",
"column":
"Incident
date",
"valueType":
"DATE",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"geometry",
"column":
"Geometry",
"valueType":
"TEXT",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
210
1 Web API 1.45.3 Request query parameters
"name":
"longitude",
"column":
"Longitude",
"valueType":
"NUMBER",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"latitude",
"column":
"Latitude",
"valueType":
"NUMBER",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"ouname",
"column":
"Organisation
unit name",
"valueType":
"TEXT",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"oucode",
"column":
"Organisation
unit code",
"valueType":
"TEXT",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
211
1 Web API 1.45.3 Request query parameters
"name":
"ou",
"column":
"Organisation
unit",
"valueType":
"TEXT",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"de0FEHSIoxh",
"column": "WHOMCH
Chronic
conditions",
"valueType":
"BOOLEAN",
"type":
"[Link]",
"hidden":
false,
"meta":
true
},
{
"name":
"sWoqcoByYmD",
"column":
"WHOMCH
Smoking",
"valueType":
"BOOLEAN",
"type":
"[Link]",
"hidden":
false,
"meta":
true
}
],
"metaData":
{
"pager":
{
"page":
2,
"total":
163,
"pageSize":
4,
"pageCount":
41
212
1 Web API 1.45.3 Request query parameters
},
"items":
{
"ImspTQPwCqd":
{
"name":
"Sierra
Leone"
},
"PFDfvmGpsR3":
{
"name":
"Care
at
birth"
},
"bbKtnxRZKEP":
{
"name":
"Postpartum
care visit"
},
"ou":
{
"name":
"Organisation
unit"
},
"PUZaKR0Jh2k":
{
"name":
"Previous
deliveries"
},
"edqlbukwRfQ":
{
"name":
"Antenatal
care visit"
},
"WZbXY0S00lP":
{
"name": "First
antenatal care
visit"
},
"sWoqcoByYmD":
{
"name":
"WHOMCH
Smoking"
},
"WSGAb5XwJ3Y":
{
"name":
"WHO RMNCH
Tracker"
},
"de0FEHSIoxh":
{
"name": "WHOMCH
Chronic conditions"
}
213
1 Web API 1.45.3 Request query parameters
},
"dimensions":
{
"pe":
[],
"ou":
["ImspTQPwCqd"],
"sWoqcoByYmD":
[],
"de0FEHSIoxh":
[]
}
},
"width":
12,
"rows":
[
[
"A0cP533hIQv",
"to8G9jAprnx",
"2019-02-02
[Link].0",
"2019-02-02
[Link].0",
"",
"0.0",
"0.0",
"Tonkomba
MCHP",
"OU_193264",
"xIMxph4NMP1",
"0",
"1"
],
[
"ZqiUn2uXmBi",
"SJtv0WzoYki",
"2019-02-02
[Link].0",
"2019-02-02
[Link].0",
"",
"0.0",
"0.0",
"Mawoma
MCHP",
"OU_254973",
"Srnpwq8jKbp",
"0",
214
1 Web API 1.45.3 Request query parameters
"0"
],
[
"lE747mUAtbz",
"PGzTv2A1xzn",
"2019-02-02
[Link].0",
"2019-02-02
[Link].0",
"",
"0.0",
"0.0",
"Kunsho
CHP",
"OU_193254",
"tdhB1JXYBx2",
"",
"0"
],
[
"nmcqu9QF8ow",
"pav3tGLjYuq",
"2019-02-03
[Link].0",
"2019-02-03
[Link].0",
"",
"0.0",
"0.0",
"Korbu
MCHP",
"OU_678893",
"m73lWmo5BDG",
"",
"1"
]
],
"height":
4
}
The headers section of the response describes the content of the query result. The enrollment unique identifier, the tracked entity instance identifier, the enrollment date, the incident date, geometry, latitude, longitude, the organisation unit name and the organisation unit code appear as the first dimensions in the response and will always be present. Next come the data elements and tracked entity attributes which were specified as dimensions in the request, in this case the “WHOMCH Chronic conditions” and “WHOMCH Smoking” data element dimensions. The header section contains the identifier of the dimension item in the “name” property and a readable dimension description in the “column” property.
The metaData section, ou object contains the identifiers of all organisation units present in the
response mapped to a string representing the hierarchy. This hierarchy string lists the identifiers
of the ancestors (parents) of the organisation unit starting from the root. The names object
contains the identifiers of all items in the response mapped to their names.
The rows section contains the enrollments produced by the query. Each row represents exactly
one enrollment.
1.45.4 Support of analytics across tracked entity instance relationships with program indicators
The non-aggregation enrollment analytics API also supports linking Program Indicators to
Relationship Types, in order to show the result of a calculation of a specific Program Indicator
applied to the related entities of the listed Tracked Entity Instance.
/api/33/analytics/enrollments/query/<program-id>
?dimension=<relationshiptype-id>.<programindicator-id>
For example, to retrieve a list of enrollments from the “WHO RMNCH Tracker” program for January 2019 and display the count of malaria cases linked to each enrollment by the “Malaria case linked to person” relationship type, you can use the following query:
/api/33/analytics/enrollments/query/[Link]?dimension=mxZDvSZYxlw.nFICjJluo74
&startDate=2019-01-01&endDate=2019-01-31
The API supports using program indicators which are not associated with the “main” program (that is, the program ID specified after /query/).

1.46 Org unit analytics

The org unit analytics API provides statistics on org units classified by org unit group sets, i.e. counts of org units per org unit group within org unit group sets.
GET /api/orgUnitAnalytics?ou=<org-unit-id>&ougs=<org-unit-group-set-id>
The API requires at least one organisation unit and at least one organisation unit group set.
Multiple org units and group sets can be provided separated by a semicolon.
The org unit analytics resource lets you specify a range of query parameters:
The response will contain a column for the parent org unit, columns for each org unit group set
part of the request and a column for the count. The statistics include the count of org units which
are part of the sub-hierarchy of the org units specified in the request. The response contains a
metadata section which specifies the name of each org unit and org unit group part of the
response referenced by their identifiers.
The default response is normalized with a single count column. The response can be rendered in
a table layout by specifying at least one org unit group set using the columns query parameter.
The org unit analytics endpoint supports the following representation formats:
• json (application/json)
• csv (application/csv)
• xls (application/[Link]-excel)
• pdf (application/pdf)
1.46.3 Examples
To fetch org unit analytics for an org unit and org unit group set:
GET /api/orgUnitAnalytics?ou=lc3eMKXaEfw&ougs=J5jldMd8OHv
To fetch org unit analytics data for two org units and two org unit group sets:
GET /api/orgUnitAnalytics?ou=lc3eMKXaEfw;PMa2VCrupOd&ougs=J5jldMd8OHv;Bpx0589u8y0
To fetch org unit analytics data in table mode with one group set rendered as columns:
GET /api/orgUnitAnalytics?ou=fdc6uOvgoji;jUb8gELQApl;lc3eMKXaEfw;PMa2VCrupOd
&ougs=J5jldMd8OHv&columns=J5jldMd8OHv
1.47 Data set report

Data set reports can be generated through the Web API using the /dataSetReport resource. This resource generates reports on a data set and returns the result in the form of an HTML table.

/api/33/dataSetReport

The data set report resource accepts GET requests only. The response content type is application/json and the data is returned in a grid. This endpoint works for all types of data sets, including default, section and custom forms.
An example request to retrieve a report for a data set and org unit for 2018 looks like this:
GET /api/33/dataSetReport?ds=BfMAe6Itzgt&pe=201810&ou=ImspTQPwCqd&selectedUnitOnly=false
To get a data set report with a filter you can use the filter parameter. In this case, the filter is
based on an org unit group set and two org unit groups:
GET /api/33/dataSetReport?ds=BfMAe6Itzgt&pe=201810&ou=ImspTQPwCqd
&filter=J5jldMd8OHv:RXL3lPSK8oG;tDZVQ1WtwpA
1.47.2 Response formats
The data set report endpoint supports output in the following formats. You can retrieve a specific format using the file extension or the Accept HTTP header; see the example after the list.
• json (application/json)
• pdf (application/pdf)
• xls (application/[Link]-excel)
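As an example of using a file extension, the curl sketch below retrieves the report from the earlier request as a PDF file; the base URL and the admin:district credentials are assumptions for a local demo instance.

# Download the data set report as PDF via the .pdf file extension
curl -u admin:district -o data-set-report.pdf "https://localhost:8080/api/33/dataSetReport.pdf?ds=BfMAe6Itzgt&pe=201810&ou=ImspTQPwCqd"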
A dedicated endpoint is available for data sets with custom HTML forms. This endpoint returns
the HTML form content with content type text/html with data inserted into it. Note that you
can use the general data set report endpoint also for data sets with custom forms; however, that
will return the report in JSON format as a grid. This endpoint only works for data sets with
custom HTML forms.
GET /api/33/dataSetReport/custom
The syntax for this endpoint is otherwise equal to the general data set report endpoint. To
retrieve a custom HTML data set report you can issue a request like this:
GET /api/33/dataSetReport/custom?ds=lyLU2wR22tC&pe=201810&ou=ImspTQPwCqd
The push analysis API includes endpoints for previewing a push analysis report for the logged in
user and manually triggering the system to generate and send push analysis reports, in addition
to the normal CRUD operations. When using the create and update endpoints for push analysis,
the push analysis will be scheduled to run based on the properties of the push analysis. When
deleting or updating a push analysis to be disabled, the job will also be stopped from running in
the future.
To get an HTML preview of an existing push analysis, you can do a GET request to the following
endpoint:
/api/33/pushAnalysis/<id>/render
To manually trigger a push analysis job, you can do a POST request to this endpoint:
/api/33/pushAnalysis/<id>/run
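For example, a manual trigger could be issued with curl as sketched below, substituting <id> with the identifier of your push analysis; the server URL and credentials are assumptions.

# Manually trigger a push analysis job (replace <id> with a real push analysis UID)
curl -u admin:district -X POST "https://localhost:8080/api/33/pushAnalysis/<id>/run"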
A push analysis consists of the following properties, where some are required to automatically
run push analysis jobs:
1.49 Data usage analytics
The usage analytics API lets you access information about how people are using DHIS2 based on
data analysis. When users access favorites, an event is recorded. The event consists of the user
name, the UID of the favorite, when the event took place, and the type of event. The different
types of events are listed in the table.
/api/33/dataStatistics
The usage analytics API lets you retrieve aggregated snapshots of usage analytics based on time
intervals. The API captures user views (for example the number of times a chart or pivot table has
been viewed by a user) and saved analysis favorites (for example favorite charts and pivot
tables). DHIS2 will capture nightly snapshots which are then aggregated at request.
1.49.2 Create view events (POST)
The usage analytics API lets you create event views. The dataStatisticsEventType parameter
describes what type of item was viewed. The favorite parameter indicates the identifier of the
relevant favorite.
POST /api/33/dataStatistics?eventType=CHART_VIEW&favorite=LW0O27b7TdD
A successful save operation returns an HTTP status code 201. The table below shows the
supported types of events.
Key Description
REPORT_TABLE_VIEW Report table (pivot table) view
CHART_VIEW Chart view
MAP_VIEW Map view (GIS)
EVENT_REPORT_VIEW Event report view
EVENT_CHART_VIEW Event chart view
DASHBOARD_VIEW Dashboard view
DATA_SET_REPORT_VIEW Data set report view
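As an illustration, the view event from the request above can be recorded with a curl call like the following sketch; the base URL and admin:district credentials are assumptions.

# Record a chart view event for the favorite LW0O27b7TdD; a 201 status is returned on success
curl -u admin:district -X POST "https://localhost:8080/api/33/dataStatistics?eventType=CHART_VIEW&favorite=LW0O27b7TdD"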
The usage analytics (data statistics) API lets you specify certain query parameters when asking for
an aggregated report.
The startDate and endDate parameters specify the period for which snapshots are to be used in
the aggregation. You must format the dates as shown above. If no snapshots are saved in the
specified period, an empty list is sent back. The parameter called interval specifies what type of
aggregation will be done.
GET /api/33/dataStatistics?startDate=2014-01-02&endDate=2016-01-01&interval=MONTH
1.49.4 Retrieve top favorites
The usage analytics API lets you retrieve the top favorites used in DHIS2, and by user.
The API query can be used without a username, and will then find the top favorites of the system.
/api/33/dataStatistics/favorites?eventType=CHART_VIEW&pageSize=25&sortOrder=ASC
If the username is specified, the response will only contain the top favorites of that user.
/api/33/dataStatistics/favorites?eventType=CHART_VIEW&pageSize=25&sortOrder=ASC&username=admin
You can return the aggregated data in a usage analytics response in several representation
formats. The default format is JSON. The available formats and content types are:
• json (application/json)
• xml (application/xml)
• html (text/html)
/api/33/[Link]?startDate=2014-01-01&endDate=2016-01-01&interval=WEEK
You must retrieve the aggregated usage analytics response with the HTTP GET method. This allows you to link directly from web pages and other HTTP-enabled clients to usage analytics responses. For functional testing you can use the cURL tool.

Execute this command against the demo database to get a usage analytics response in JSON format:

curl "[Link]/demo/api/33/dataStatistics?startDate=2016-02-01&endDate=2016-02-14&interval=WEEK" -u admin:district
[
  {
    "year": 2016,
    "week": 5,
    "mapViews": 2181,
    "chartViews": 2227,
    "reportTableViews": 5633,
    "eventReportViews": 6757,
    "eventChartViews": 9860,
    "dashboardViews": 10082,
    "totalViews": 46346,
    "averageViews": 468,
    "averageMapViews": 22,
    "averageChartViews": 22,
    "averageReportTableViews": 56,
    "averageEventReportViews": 68,
    "averageEventChartViews": 99,
    "averageDashboardViews": 101,
    "savedMaps": 1805,
    "savedCharts": 2205,
    "savedReportTables": 1995,
    "savedEventReports": 1679,
    "savedEventCharts": 1613,
    "savedDashboards": 0,
    "savedIndicators": 1831,
    "activeUsers": 99,
    "users": 969
  },
  {
    "year": 2016,
    "week": 6,
    "mapViews": 2018,
    "chartViews": 2267,
    "reportTableViews": 4714,
    "eventReportViews": 6697,
    "eventChartViews": 9511,
    "dashboardViews": 12181,
    "totalViews": 47746,
    "averageViews": 497,
    "averageMapViews": 21,
    "averageChartViews": 23,
    "averageReportTableViews": 49,
    "averageEventReportViews": 69,
    "averageEventChartViews": 99,
    "averageDashboardViews": 126,
    "savedMaps": 1643,
    "savedCharts": 1935,
    "savedReportTables": 1867,
    "savedEventReports": 1977,
    "savedEventCharts": 1714,
    "savedDashboards": 0,
    "savedIndicators": 1646,
    "activeUsers": 96,
    "users": 953
  }
]
1.49.6 Retrieve statistics for a favorite

You can retrieve the number of views for a specific favorite by using the favorites resource, where {favorite-id} should be substituted with the identifier of the favorite of interest:

/api/33/dataStatistics/favorites/{favorite-id}.json

The response will contain the number of views for the given favorite and look like this:

{
  "views": 3
}
The geoFeatures resource lets you retrieve geospatial information from DHIS2. Geospatial
features are stored together with organisation units. The syntax for retrieving features is identical
to the syntax used for the organisation unit dimension for the analytics resource. It is recommended to read up on the analytics API resource before continuing with this section.
You must use the GET request type, and only JSON response format is supported.
As an example, to retrieve geo features for all organisation units at level 3 in the organisation unit
hierarchy you can use a GET request with the following URL:
/api/33/[Link]?ou=ou:LEVEL-3
To retrieve geo features for organisation units at a level within the boundary of an organisation
unit (e.g. at level 2) you can use this URL:
/api/33/[Link]?ou=ou:LEVEL-4;O6uvpzGd5pu
The semantics of the response properties are described in the following table.
Property Description
id Organisation unit / geo feature identifier
na Organisation unit / geo feature name
hcd Has coordinates down, indicating whether one or more children organisation units exist with coordinates (below in the hierarchy)
hcu Has coordinates up, indicating whether the parent organisation unit has coordinates (above in the hierarchy)
le Level of this organisation unit / geo feature
pg Parent graph, the graph of parent organisation unit identifiers up to the root in the hierarchy
pi Parent identifier, the identifier of the parent of this organisation unit
pn Parent name, the name of the parent of this organisation unit
ty Geo feature type, 1 = point and 2 = polygon or multi-polygon
co Coordinates of this geo feature
1.50.1 GeoJSON
To export GeoJSON, you can simply add .geojson as an extension to the endpoint /api/organisationUnits, or you can use the Accept header application/json+geojson.
Two parameters are supported: level (defaults to 1) and parent (defaults to root organisation
units). Both can be included multiple times. Some examples follow.
/api/[Link]?level=2&level=4
/api/[Link]?parent=fdc6uOvgoji&level=3
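As an example of the Accept header variant, the sketch below downloads level 2 org units as GeoJSON; the server URL and admin:district credentials are assumptions.

# Retrieve org units at level 2 as GeoJSON via the Accept header
curl -u admin:district -H "Accept: application/json+geojson" -o orgunits-level2.geojson "https://localhost:8080/api/organisationUnits?level=2"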
DHIS2 features a set of generated database tables which are used as a basis for various system functionality. Generation of these tables can be run immediately or scheduled to run at regular intervals through the user interface. They can also be generated through the Web API as explained in this section. This task is typically one for a system administrator and not for consuming clients.
The resource tables are used internally by the DHIS2 application for various analysis functions.
These tables are also valuable for users writing advanced SQL reports. They can be generated
with a POST or PUT request to the following URL:
/api/33/resourceTables
The analytics tables are optimized for data aggregation and used currently in DHIS2 for the pivot
table module. The analytics tables can be generated with a POST or PUT request to:
/api/33/resourceTables/analytics
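A minimal sketch for triggering analytics table generation from the command line is shown below; the server URL and credentials are assumptions. Once started, the task can be monitored through the system tasks resource described later in this chapter.

# Start analytics table generation asynchronously
curl -u admin:district -X POST "https://localhost:8080/api/33/resourceTables/analytics"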
“Data Quality” and “Data Surveillance” can be run through the monitoring task, triggered with the
following endpoint:
/api/33/resourceTables/monitoring
This task will analyse your validation rules, find any violations and persist them as validation
results.
1.52 Maintenance
To perform maintenance you can interact with the maintenance resource. You should use POST
or PUT as a method for requests. The following methods are available.
Analytics table analyze will collect statistics about the contents of the analytics tables in the database.
Expired invitations clear will remove all user account invitations which have expired.
Period pruning will remove periods which are not linked to any data values.
Zero data value removal will delete zero data values linked to data elements where zero data is defined as not significant.
Soft deleted data value removal will permanently delete soft deleted data values.
Soft deleted program stage instance removal will permanently delete soft deleted events.
Soft deleted program instance removal will permanently delete soft deleted enrollments.
Soft deleted tracked entity instance removal will permanently delete soft deleted tracked entity
instances.
Drop SQL views will drop all SQL views in the database. Note that it will not delete the DHIS2 SQL
view entities.
Create SQL views will recreate all SQL views in the database.
Category option combo update will remove obsolete and generate missing category option
combos for all category combinations.
It is also possible to update category option combos for a single category combo using the
following endpoint.
Cache clearing will clear the application Hibernate cache and the analytics partition caches.
Org unit paths update will re-generate the organisation unit path property. This can be useful
e.g. if you imported org units with SQL.
Data pruning will remove complete data set registrations, data approvals, data value audits and
data values, in this case for an organisation unit.
Data pruning for data elements, which will remove data value audits and data values.
Metadata validation will apply all metadata validation rules and return the result of the
operation.
App reload will refresh the DHIS2 managed cache of installed apps by reading from the file
system.
Maintenance operations are supported in a batch style with a POST request to the api/
maintenance resource where the operations are supplied as query parameters:
1.53 System resource

The system resource provides you with convenient information and functions. The system resource can be found at /api/system.
To generate valid, random DHIS2 identifiers you can do a GET request to this resource:
/api/33/system/id?limit=3
The limit query parameter is optional and indicates how many identifiers you want to be
returned with the response. The default is to return one identifier. The response will contain a
JSON object with an array named codes, similar to this:
{
  "codes": ["Y0moqFplrX4", "WI0VHXuWQuV", "BRJNBBpu4ki"]
}
The generated identifiers are always 11 characters long.

1.53.2 View system information
To get information about the current system you can do a GET request to this URL:
/api/33/system/info
JSON and JSONP response formats are supported. The system info response currently includes
the below properties.
{
  "contextPath": "http://[Link]",
  "userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/29.0.1547.62",
  "version": "2.13-SNAPSHOT",
  "revision": "11852",
  "buildTime": "2013-09-01T[Link].000+0000",
  "serverDate": "2013-09-02T[Link].311+0000",
  "environmentVariable": "DHIS2_HOME",
  "javaVersion": "1.7.0_06",
  "javaVendor": "Oracle Corporation",
  "javaIoTmpDir": "/tmp",
  "javaOpts": "-Xms600m -Xmx1500m -XX:PermSize=400m -XX:MaxPermSize=500m",
  "osName": "Linux",
  "osArchitecture": "amd64",
  "osVersion": "3.2.0-52-generic",
  "externalDirectory": "/home/dhis/config/dhis2",
  "databaseInfo": {
    "type": "PostgreSQL",
    "name": "dhis2",
    "user": "dhis",
    "spatialSupport": false
  },
  "memoryInfo": "Mem Total in JVM: 848 Free in JVM: 581 Max Limit: 1333",
  "cpuCores": 8
}
Note
If the user who is requesting this resource does not have full authority
in the system then only the first seven properties will be included, as
this information is security sensitive.
To get information about the system context only, i.e. contextPath and userAgent, you can
make a GET request to the below URL. JSON and JSONP response formats are supported:
/api/33/system/context
1.53.3 Check if username and password combination is correct

To check if some user credentials (a username and password combination) are correct you can make a GET request to the following resource using basic authentication:

/api/33/system/ping
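A minimal sketch for checking credentials from the command line is shown below; the base URL is an assumption, and the -s, -o and -w flags simply make curl print only the HTTP status code.

# Print only the HTTP status code of the ping request
curl -s -o /dev/null -w "%{http_code}\n" -u admin:district "https://localhost:8080/api/33/system/ping"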
You can detect the outcome of the authentication by inspecting the HTTP status code of the
response header. The meanings of the possible status codes are listed below. Note that this
applies to Web API requests in general.
HTTP status code and outcome
200 OK: Authentication was successful.
302 Found: No credentials were supplied with the request; no authentication took place.
401 Unauthorized: The username and password combination was incorrect; authentication failed.
1.53.4 View asynchronous task status

Tasks which often take a long time to complete can be performed asynchronously. After initiating an async task you can poll the status through the system/tasks resource by supplying the task category and the task identifier of interest.
When polling for the task status you need to authenticate as the same user which initiated the
task. The following task categories are supported:
Task categories
Identifier Description
ANALYTICS_TABLE Generation of the analytics tables.
RESOURCE_TABLE Generation of the resource tables.
MONITORING Processing of data surveillance/monitoring validation rules.
DATAVALUE_IMPORT Import of data values.
EVENT_IMPORT Import of events.
ENROLLMENT_IMPORT Import of enrollments.
TEI_IMPORT Import of tracked entity instances.
METADATA_IMPORT Import of metadata.
DATA_INTEGRITY Processing of data integrity checks.
Each asynchronous task is automatically assigned an identifier which can be used to monitor the
status of the task. This task identifier is returned by the API when you initiate an async task
through the various async-enabled endpoints.
You can poll the task status through a GET request to the system tasks resource like this:
/api/33/system/tasks/{task-category-id}/{task-id}
/api/33/system/tasks/DATAVALUE_IMPORT/j8Ki6TgreFw
The response will provide information about the status, such as the notification level, category,
time and status. The completed property indicates whether the process is considered to be
complete.
[
  {
    "uid": "hpiaeMy7wFX",
    "level": "INFO",
    "category": "DATAVALUE_IMPORT",
    "time": "2015-09-02T[Link].595+0000",
    "message": "Import done",
    "completed": true
  }
]
You can poll all tasks for a specific category through a GET request to the system tasks resource:
/api/33/system/tasks/{task-category-id}
An example request to poll for the status of data value import tasks looks like this:
/api/33/system/tasks/DATAVALUE_IMPORT
You can request a list of all currently running tasks in the system with a GET request to the
system tasks resource:
/api/33/system/tasks
[
  {
    "EVENT_IMPORT": {},
    "DATA_STATISTICS": {},
    "RESOURCE_TABLE": {},
    "FILE_RESOURCE_CLEANUP": {},
    "METADATA_IMPORT": {},
    "CREDENTIALS_EXPIRY_ALERT": {},
    "SMS_SEND": {},
    "MOCK": {},
    "ANALYTICSTABLE_UPDATE": {},
    "COMPLETE_DATA_SET_REGISTRATION_IMPORT": {},
    "DATAVALUE_IMPORT": {},
    "DATA_SET_NOTIFICATION": {},
    "DATA_INTEGRITY": {
      "OB1qGRlCzap": [
        {
          "uid": "LdHQK0PXZyF",
          "level": "INFO",
          "category": "DATA_INTEGRITY",
          "time": "2018-03-26T[Link].171",
          "message": "Data integrity checks completed in 38.31 seconds.",
          "completed": true
        }
      ]
    },
    "PUSH_ANALYSIS": {},
    "MONITORING": {},
    "VALIDATION_RESULTS_NOTIFICATION": {},
    "REMOVE_EXPIRED_RESERVED_VALUES": {},
    "DATA_SYNC": {},
    "SEND_SCHEDULED_MESSAGE": {},
    "DATAVALUE_IMPORT_INTERNAL": {},
    "PROGRAM_NOTIFICATIONS": {},
    "META_DATA_SYNC": {},
    "ANALYTICS_TABLE": {},
    "PREDICTOR": {}
  }
]
1.53.5 View asynchronous task summaries

The task summaries resource allows you to retrieve a summary of an asynchronous task
invocation. You need to specify the category and optionally the identifier of the task. The task
identifier can be retrieved from the response of the API request which initiated the asynchronous
task.
To retrieve the summary of a specific task you can issue a request to:
/api/33/system/taskSummaries/{task-category-id}/{task-id}
/api/33/system/taskSummaries/DATAVALUE_IMPORT/k72jHfF13J1
"responseType":
"ImportSummary",
"status":
"SUCCESS",
"importOptions":
{
"idSchemes":
{},
"dryRun":
false,
"async":
true,
"importStrategy":
"CREATE_AND_UPDATE",
"mergeMode":
"REPLACE",
"reportMode":
"FULL",
"skipExistingCheck":
false,
"sharing":
false,
"skipNotifications":
false,
"datasetAllowsPeriods":
false,
"strictPeriods":
false,
"strictCategoryOptionCombos":
false,
"strictAttributeOptionCombos":
false,
"strictOrganisationUnits":
false,
"requireCategoryOptionCombo":
false,
"requireAttributeOptionCombo":
false,
"skipPatternValidation":
false
},
"description": "Import process
completed successfully",
"importCount":
{
235
1 Web API 1.53.6 Get appearance information
"imported":
0,
"updated":
431,
"ignored":
0,
"deleted":
0
},
"dataSetComplete":
"false"
}
You might also retrieve import summaries for multiple tasks of a specific category with a request
like this:
/api/33/system/taskSummaries/{task-category-id}
1.53.6 Get appearance information

You can retrieve the available flag icons in JSON format with a GET request:
/api/33/system/flags
You can retrieve the available UI styles in JSON format with a GET request:
/api/33/system/styles
1.54 Locales
DHIS2 supports translations both for the user interface and for database content.
1.54.1 UI locales
You can retrieve the available locales for the user interface through the following resource with a
GET request. XML and JSON resource representations are supported.
/api/33/locales/ui
You can retrieve and create locales for the database content with GET and POST requests
through the following resource. XML and JSON resource representations are supported.
/api/33/locales/db
DHIS2 allows for translations of database content. You can work with translations through the
Web API using the translations resource.
1.55.1 Create a translation
/api/33/translations
"objectId":
"P3jJH5Tu5VC",
"className":
"DataElement",
"locale":
"es",
"property":
"name",
"value": "Casos
de fiebre
amarilla"
}
POST /api/33/translations
The properties which support translations are listed in the table below.
Property names
The classes which support translations are listed in the table below.
Class names
1.55.2 Get translations
GET /api/33/translations
You can use the standard filtering technique to fetch translations of interest. E.g. to get all
translations for data elements in the Spanish locale you can use this request:
/api/33/[Link]?fields=*&filter=className:eq:DataElement&filter=locale:eq:es
/api/33/[Link]?fields=*&filter=className:eq:DataElement
&filter=locale:eq:fr&filter=objectId:eq:fbfJHSPpUQD
This section covers the SMS Web API for sending and receiving short text messages.
1.56.1 Outbound SMS service

The Web API supports sending outgoing SMS using the POST method. SMS can be sent to a single destination or to multiple destinations. One or more gateways need to be configured before using the service; an SMS will not be sent if no gateway is configured. The request needs a set of recipients and a message text in JSON format as shown below.
/api/33/sms/outbound
"message":
"Sms
Text",
"recipients":
["004712341234",
"004712341235"]
}
Note
The recipients list will be partitioned if its size exceeds the MAX_ALLOWED_RECIPIENTS limit of 200.
The Web API also supports a query parameter based version, but this parameterized API can only be used for sending SMS to a single destination.
/api/33/sms/outbound?message=text&recipient=004712341234
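A sketch of such a single-destination request with curl is shown below; the server URL and the admin:district credentials are assumptions, and the recipient number is taken from the example payload above.

# Send an SMS to a single recipient via the query parameter variant
curl -u admin:district -X POST "https://localhost:8080/api/33/sms/outbound?message=Test&recipient=004712341234"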
Response code (response message): Description
RESULT_CODE_0 (success): Message has been sent successfully.
RESULT_CODE_1 (scheduled): Message has been scheduled successfully.
RESULT_CODE_22 (internal fatal error): Internal fatal error.
RESULT_CODE_23 (authentication failure): Authentication credentials are incorrect.
RESULT_CODE_24 (data validation failed): Parameters provided in the request are incorrect.
RESULT_CODE_25 (insufficient credits): Credit is not enough to send the message.
RESULT_CODE_26 (upstream credits not available): Upstream credits not available.
RESULT_CODE_27 (exceeded your daily quota): You have exceeded your daily quota.
RESULT_CODE_40 (temporarily unavailable): Service is temporarily down.
RESULT_CODE_201 (maximum batch size exceeded): Maximum batch size exceeded.
RESULT_CODE_200 (success): The request was successfully completed.
RESULT_CODE_202 (accepted): The message(s) will be processed.
RESULT_CODE_207 (multi-status): More than one message was submitted to the API; however, not all messages have the same status.
RESULT_CODE_400 (bad request): Validation failure (such as missing or invalid parameters or headers).
RESULT_CODE_401 (unauthorized): Authentication failure. This can also be caused by IP lockdown settings.
RESULT_CODE_402 (payment required): Not enough credit to send the message.
RESULT_CODE_404 (not found): Resource does not exist.
RESULT_CODE_405 (method not allowed): HTTP method is not supported on the resource.
RESULT_CODE_410 (gone): Mobile number is blocked.
RESULT_CODE_429 (too many requests): Generic rate limiting error.
RESULT_CODE_503 (service unavailable): A temporary error has occurred on our platform - please retry.
1.56.2 Inbound SMS service

The Web API supports collecting incoming SMS messages using the POST method. Incoming messages routed towards the DHIS2 Web API can be received using this API. The API collects inbound SMS messages and provides them to listeners for parsing, based on the SMS content (SMS command). An example payload in JSON format is given below. Text, originator, received date and sent date are mandatory parameters. The rest are optional; the system will use default values for them.
/api/33/sms/inbound
"text":
"sample
text",
"originator":
"004712341234",
"gatewayid":
"unknown",
"receiveddate":
"2016-05-01",
"sentdate":
"2016-05-01",
"smsencoding":
"1",
"smsstatus":
"1"
}
/api/33/sms/inbound?message=text&originator=47XXXXXX&gateway=clickatel
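A sketch of posting an inbound SMS as JSON with curl is shown below, using the mandatory parameters from the payload above; the server URL and credentials are assumptions.

# Register an inbound SMS with the mandatory fields
curl -u admin:district -X POST -H "Content-Type: application/json" -d '{"text": "sample text", "originator": "004712341234", "receiveddate": "2016-05-01", "sentdate": "2016-05-01"}' "https://localhost:8080/api/33/sms/inbound"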
1.56.3 Gateway service administration

The Web API exposes resources which provide a way to configure and update SMS gateway configurations.

The list of configured gateways can be retrieved using a GET request:
GET /api/33/gateways
The configuration can also be retrieved for a specific gateway using the GET method:
GET /api/33/gateways/{uid}
New gateway configurations can be added using POST. The POST API requires a type request parameter, which currently accepts one of the values http, bulksms or clickatell. The first gateway added is set as the default. Only one gateway can be the default at a time, and the default gateway can only be changed through its own API. If the default gateway is removed, the next one in the list automatically becomes the default.
POST /api/33/gateways
A configuration can be updated by providing the uid and the gateway configuration as shown below:

PUT /api/33/gateways/{uid}
Configurations can be removed for a specific gateway using the DELETE method:
DELETE /api/33/gateways/{uid}
GET /api/33/gateways/default
PUT /api/33/gateways/default/{uid}
1.56.4 Gateway configuration

The Web API lets you create and update gateway configurations. For each type of gateway there are different parameters in the JSON payload. Sample JSON payloads for each gateway are given below. POST is used to create and PUT to update configurations. For the GenericHttpGateway, the header parameter can be used to send one or more parameters as HTTP headers.
[Link] Clickatell
"type":
"clickatell",
"name":
"clickatell",
"username":
"clickatelluser",
"authtoken":
"XXXXXXXXXXXXXXXXXXXX",
"urlTemplate": "https://
[Link]/messages"
}
[Link] Bulksms
"type":
"bulksms",
"name":
"bulkSMS",
"username":
"bulkuser",
"password":
"abc123"
}
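A sketch of creating the BulkSMS configuration above with curl is shown below; the server URL and the admin:district credentials are assumptions.

# Create a new gateway configuration by POSTing the JSON payload
curl -u admin:district -X POST -H "Content-Type: application/json" -d '{"type": "bulksms", "name": "bulkSMS", "username": "bulkuser", "password": "abc123"}' "https://localhost:8080/api/33/gateways"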
"type":
"smpp",
"name":
"smpp
gateway2",
242
1 Web API 1.56.4 Gateway configuration
"systemId":
"smppclient1",
"host":
"localhost",
"systemType":
"cp",
"numberPlanIndicator":
"UNKNOWN",
"typeOfNumber":
"UNKNOWN",
"bindType":
"BIND_TX",
"port":
2775,
"password":
"password",
"compressed":
false
}
"type":
"http",
"name":
"Generic",
"configurationTemplate":
"{\"to\": \"!
[{recipients}\",
\"body\": \"{text}
\",
\"deliveryReport\":
\"](<html>
<head><title>405 Not Allowed</
title></head>
<body
bgcolor="white">
<center><h1>405 Not Allowed</
h1></center>
</body>
</html>
"{recipients}
\",
\"body\":
\"{text}
\",
\"deliveryReport\":
\"")
{deliveryReport}
\"}",
"useGet":
false,
"contentType":
"APPLICATION_JSON",
"urlTemplate":"https://
[Link]/
messages",
"parameters":
[
{
243
1 Web API 1.56.4 Gateway configuration
"header":
true,
"encode":
false,
"key":
"username",
"value":
"user_uio",
"confidential":
true
},
{
"header":
true,
"encode":
false,
"key":
"password",
"value":
"123abcxyz",
"confidential":
true
},
{
"header":
false,
"encode":
false,
"key":
"deliveryReport",
"value":
"yes",
"confidential":
false
}
],
"isDefault":
false
}
1.57 SMS Commands
SMS commands are used to collect data through SMS. Each command belongs to a specific parser type, and each parser has different functionality.
GET /api/smsCommands
GET /api/smsCommands/uid
PUT /api/smsCommands/uid
POST /api/smsCommands
DELETE /api/smsCommands/uid
These command types can be used by the Android app for data submission via SMS when
internet is unavailable. The SMS is composed by the Android app.
1.58 Program Messages

The program message API lets you send messages to tracked entity instances, contact addresses associated with organisation units, phone numbers and email addresses. You can send messages through the messages resource.
/api/33/messages
• SMS (SMS)
• Tracked entity instance: The system will look up attributes of value type PHONE_NUMBER or EMAIL (depending on the specified delivery channels) and use the corresponding attribute values.
• Organisation unit: The system will use the phone number or email information registered for the organisation unit.
• List of phone numbers: The system will use the explicitly defined phone numbers.
• List of email addresses: The system will use the explicitly defined email addresses.
1.58.1 Sending program messages

Below is a sample JSON payload for sending messages using POST requests. Note that the messages resource accepts a wrapper object named programMessages which can contain any number of program messages.
POST /api/33/messages
"programMessages":
[
{
"recipients":
{
"trackedEntityInstance":
{
"id":
"UN810PwyVYO"
},
"organisationUnit":
{
"id":
"Rp268JB6Ne4"
},
"phoneNumbers":
["55512345",
"55545678"],
"emailAddresses":
["johndoe@[Link]",
"markdoe@[Link]"]
},
"programInstance":
{
"id":
"f3rg8gFag8j"
},
"programStageInstance":
{
"id":
"pSllsjpfLH2"
},
"deliveryChannels":
["SMS",
"EMAIL"],
"subject":
"Outbreak
alert",
"text": "An outbreak
has been detected",
"storeCopy":
false
}
247
1 Web API 1.58.1 Sending program messages
]
}
A minimalistic example for sending a message over SMS to a tracked entity instance looks like
this:
"programMessages":
[
{
"recipients":
{
"trackedEntityInstance":
{
"id":
"PQfMcpmXeFE"
}
},
"programInstance":
{
"id":
"JMgRZyeLWOo"
},
"deliveryChannels":
["SMS"],
"text": "Please make a
visit on Thursday"
}
]
}
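To send the minimal payload above from the command line, a curl sketch could look like this, assuming the payload has been saved to a local file named program-message.json and that the server URL and credentials match your instance:

# POST the program message payload stored in program-message.json
curl -u admin:district -X POST -H "Content-Type: application/json" -d @program-message.json "https://localhost:8080/api/33/messages"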
1.58.2 Retrieving and deleting program messages

GET /api/33/messages
GET /api/33/messages/{uid}
DELETE /api/33/messages/{uid}
The program message API supports program message queries based on request parameters. Messages can be filtered based on the query parameters listed below. All requests should use the GET HTTP verb for retrieving information.
Parameter URL
programInstance /api/33/messages?programInstance=6yWDMa0LP7
programStageInstance /api/33/messages?programStageInstance=SllsjpfLH2
trackedEntityInstance /api/33/messages?trackedEntityInstance=xdfejpfLH2
organisationUnit /api/33/messages?ou=Sllsjdhoe3
processedDate /api/33/messages?processedDate=2016-02-01
1.59 Users
/api/33/users
The users resource offers additional query parameters beyond the standard parameters
(e.g. paging). To query for users at the users resource you can use the following parameters.
250
1 Web API 1.59.2 User credentials query
A query for a maximum of 10 users with “konan” as first name or surname (case-insensitive) who have a subset of authorities compared to the current user:
/api/33/users?query=konan&authSubset=true&pageSize=10
1.59.2 User credentials query

An alternative to the previous user query is to query the user credentials directly (the part where the username, etc. resides) using the /api/userCredentials endpoint. It supports the same regular field and object filters as the other endpoints.
/api/33/userCredentials?filter=username:eq:admin
Get username and code from all user credentials where username starts with adm:
/api/33/userCredentials?fields=username,code&filter=username:^like:adm
1.59.3 User account create and update

Both creating and updating a user are supported through the API. The payload itself is similar to other payloads in the API, so it supports collection references etc. A simple example payload for creating a user is shown below. The password should be sent in plain text (remember to only use this on an SSL-enabled server) and will be encrypted on the backend:
{
  "id": "Mj8balLULKp",
  "firstName": "John",
  "surname": "Doe",
  "email": "johndoe@[Link]",
  "userCredentials": {
    "id": "lWCkJ4etppc",
    "userInfo": {"id": "Mj8balLULKp"},
    "username": "johndoe123",
    "password": "Your-password-123",
    "skype": "[Link]",
    "telegram": "[Link]",
    "whatsApp": "+1-541-754-3010",
    "facebookMessenger": "[Link]",
    "avatar": {"id": "<fileResource id>"},
    "userRoles": [
      {"id": "Ufph3mGRmMo"}
    ]
  },
  "organisationUnits": [
    {"id": "Rp268JB6Ne4"}
  ],
  "userGroups": [
    {"id": "wl5cDMuUhmF"}
  ]
}
In the user creation payload, user groups are only supported when importing or POSTing a single user at a time. If you attempt to create more than one user while specifying user groups, you will not receive an error and the users will be created, but no user groups will be assigned. This is by design and is limited because of the many-to-many relationship between users and user groups, whereby user groups is the owner of the relationship. To update or create multiple users and their user groups, consider a program to POST one at a time, or POST / import all users followed by another action to update their user groups while specifying the new users' identifiers.
After the user is created, a Location header is sent back with the newly generated ID (you can also provide your own using the /api/system/id endpoint). The same payload can then be used to do updates, but remember to then use PUT instead of POST, and the endpoint is now /api/users/ID.
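A sketch of the corresponding create and update calls with curl is shown below, assuming the payload above has been saved to a local file named user.json and that the server URL and credentials match your instance.

# Create the user by POSTing the payload
curl -u admin:district -X POST -H "Content-Type: application/json" -d @user.json "https://localhost:8080/api/33/users"
# Update the same user with PUT against its identifier
curl -u admin:district -X PUT -H "Content-Type: application/json" -d @user.json "https://localhost:8080/api/33/users/Mj8balLULKp"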
For more info about the full payload available, please see /api/schemas/user.
For more info about uploading and retrieving user avatars, please see the /fileResources
endpoint.
1.59.4 User account invitations

The Web API supports inviting people to create user accounts through the invite resource. To create an invitation you should POST a user in XML or JSON format to the invite resource. A specific username can be forced by defining the username in the posted entity. By omitting the username, the person will be able to specify it themselves. The system will send out an invitation through email. This requires that email settings have been properly configured. The invite resource is useful in order to securely allow people to create accounts without anyone else knowing the password and without transferring the password in plain text. The payload to use for the invite is the same as for creating users. An example payload in JSON looks like this:
"firstName":
"John",
"surname":
"Doe",
"email":
"johndoe@[Link]",
"userCredentials":
{
"username":
"johndoe",
"userRoles":
[
{
"id":
"Euq3XfEIEbx"
}
]
},
"organisationUnits":
[
{
"id":
"ImspTQPwCqd"
}
],
"userGroups":
[
{
"id":
"vAvEltyXGbD"
}
]
}
To send out invites for multiple users at the same time you must use a slightly different format.
For JSON:
"users":
[
{
"firstName":
"John",
"surname":
"Doe",
"email":
"johndoe@[Link]",
"userCredentials":
{
"username":
"johndoe",
"userRoles":
[
{
"id":
"Euq3XfEIEbx"
}
]
},
"organisationUnits":
[
{
"id":
"ImspTQPwCqd"
}
]
},
{
"firstName":
"Tom",
"surname":
"Johnson",
"email":
"tomj@[Link]",
"userCredentials":
{
"userRoles":
[
{
"id":
"Euq3XfEIEbx"
}
]
},
"organisationUnits":
[
{
"id":
"ImspTQPwCqd"
254
1 Web API 1.59.5 User replication
}
]
}
]
}
To create multiple invites you can post the payload to the api/users/invites resource like this:
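A sketch of such a request, assuming the payload above is saved as invites.json and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/33/users/invites" -H "Content-Type: application/json" -d @invites.json -u admin:district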
There are certain requirements for user account invitations to be sent out:
• The user to be invited must not be granted user roles with critical authorities (see below).
If any of these requirements are not met, the invite resource will return a 409 Conflict status code together with a descriptive message.
The critical authorities are:
• ALL
• Scheduling administration
1.59.5 User replication
To replicate a user you can use the replica resource. Replicating a user can be useful when debugging or reproducing issues reported by a particular user. You need to provide a new username and password for the replicated user, which you will use to authenticate later. Note that you need the ALL authority to perform this action. To replicate a user you can post a JSON payload like the one below:
"username":
"replica",
"password":
"Replica.
1234"
}
This payload can be posted to the replica resource, where you provide the identifier of the user
to replicate in the URL:
/api/33/users/<uid>/replica
1.60 Current user information and associations
In order to get information about the currently authenticated user and its associations to other resources you can work with the me resource (you can also refer to it by its old name currentUser). The current user related resources give you information which is useful when building clients, for instance for data entry and user management. The following describes these resources and their purpose.
Provides basic information about the user that you are currently logged in as, including
username, user credentials, assigned organisation units:
/api/me
/api/me/dashboard
/api/me/inbox
When changing a password, this endpoint can be used to validate the newly entered password. Password validation is done based on the PasswordValidationRules configured in the system. This endpoint supports POST, and the password string should be sent in the POST body.
/api/me/validatePassword
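A sketch of such a request, assuming the password is accepted as plain text in the request body and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/me/validatePassword" -H "Content-Type: text/plain" -d "MyNewPassword.123" -u admin:district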
While changing a password, this endpoint (which supports POST) can be used to verify the old password. The password string should be sent in the POST body.
/api/me/verifyPassword
Gives the full profile information for the current user. This endpoint supports both GET to retrieve the profile and POST to update the profile (the exact same format is used):
/api/me/user-account
256
1 Web API 1.61 Paramètres du système
/api/me/authorization
Returns true or false, indicating whether the current user has been granted the given <auth>
authorization:
/api/me/authorization/<auth>
/api/me/organisationUnits
Gives all the data sets assigned to the user's organisation units and their direct children. This endpoint contains all required information to build a form based on one of the data sets. If you want all descendants of your assigned organisation units, you can use the query parameter includeDescendants=true:
/api/me/dataSets
Gives all the programs assigned to the user's organisation units and their direct children. This endpoint contains all required information to build a form based on one of the programs. If you want all descendants of your assigned organisation units, you can use the query parameter includeDescendants=true:
/api/me/programs
Gives the data approval levels which are relevant to the current user:
/api/me/dataApprovalLevels
1.61 System settings
You can manipulate system settings by interacting with the systemSettings resource. A system setting is a simple key-value pair, where both the key and the value are plain text strings. To save or update a system setting you can make a POST request to the following URL:
/api/33/systemSettings/my-key?value=my-val
Alternatively, you can submit the setting value as the request body, where content type is set to
“text/plain”. As an example, you can use curl like this:
curl "[Link]/demo/api/33/systemSettings/my-
key" -d "My long value"
-H "Content-Type: text/plain" -u
admin:district
To set system settings in bulk you can send a JSON object with a property and value for each
system setting key-value pair using a POST request:
"keyApplicationNotification":
"Welcome",
"keyApplicationIntro":
"DHIS2",
"keyApplicationFooter": "Read
more at [Link]"
}
Translations for translatable setting keys can be set by specifying the locale as a query parameter and the translated value, which can be specified either as a query parameter or within the body payload. See an example URL:
/api/33/systemSettings/<my-key>?locale=<my-locale>&value=<my-translated-value>
You should replace my-key with your real key and my-val with your real value. To retrieve the
value for a given key (in JSON or plain text) you can make a GET request to the following URL:
/api/33/systemSettings/my-key
/api/33/systemSettings?key=my-key
You can retrieve specific system settings as JSON by repeating the key query parameter:
curl "[Link]/demo/api/33/systemSettings?
key=keyApplicationNotification&key=keyApplicationIntro"
-u
admin:district
/api/33/systemSettings
To retrieve a specific translation for a given translatable key you can specify a locale as query
param:
/api/33/systemSettings/<my-key>?locale=<my-locale>
If present, the translation for the given locale is returned. Otherwise, a default value is returned.
If no locale is specified for the translatable key, the user default UI locale is used to fetch the
correct translation. If the given translation is not present, again, the default value is returned.
To delete a system setting, you can make a DELETE request to a URL similar to the one used above for retrieval. If a translatable key is used, all present translations will be deleted as well. To delete only a specific translation of a translatable key, the same URL as for adding a translation should be used, with an empty value provided:
/api/33/systemSettings/<my-key>?locale=<my-locale>&value=
System settings
1.62 User settings
You can manipulate user settings by interacting with the userSettings resource. A user setting is a simple key-value pair, where both the key and the value are plain text strings. The user setting will be linked to the user who is authenticated for the Web API request. To return a list of all user settings, you can send a GET request to the following URL:
/api/33/userSettings
User settings not set by the user will fall back to the equivalent system setting. To only return the values set explicitly by the user, you can append ?useFallback=false to the above URL, like this:
/api/33/userSettings?useFallback=false
To save or update a setting for the currently authenticated user you can make a POST request to
the following URL:
/api/33/userSettings/my-key?value=my-val
You can specify the user for which to save the setting explicitly with this syntax:
/api/33/userSettings/my-key?user=username&value=my-val
Alternatively, you can submit the setting value as the request body, where content type is set to
“text/plain”. As an example, you can use curl like this:
curl "[Link]
key" -d "My long value"
-H "Content-Type: text/plain" -u
admin:district
As an example, to set the UI locale of the current user to French you can use the following
command.
curl "[Link]
keyUiLocale?value=fr"
-X POST -u
admin:district
You should replace my-key with your real key and my-val with your real value. To retrieve the
value for a given key in plain text you can make a GET request to the following URL:
/api/33/userSettings/my-key
To delete a user setting, you can make a DELETE request to the URL similar to the one used
above for retrieval.
User settings
1.63 Organisation units
The organisationUnits resource follows the standard conventions as other metadata resources in DHIS2. This resource supports some additional query parameters.
To get a list of organisation units you can use the following resource.
/api/33/organisationUnits
1.63.2 Get organisation unit with relations
To get an organisation unit with related organisation units you can use the following resource.
/api/33/organisationUnits/{id}
The dataSets resource follows the standard conventions as other metadata resources in DHIS2.
This resource supports some additional query parameters.
/api/33/dataSets
To retrieve the version of a data set you can issue a GET request:
GET /api/33/dataSets/<uid>/version
To bump (increase by one) the version of a data set you can issue a POST request:
POST /api/33/dataSets/<uid>/version
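A sketch of such a request with curl, where <uid> is the data set identifier and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/33/dataSets/<uid>/version" -u admin:district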
1.64.1 Data set notification templates
The data set notification templates resource follows the standard conventions as other metadata resources in DHIS2.
GET /api/33/dataSetNotficationTemplates
To retrieve a data set notification template you can issue a GET request:
GET /api/33/dataSetNotficationTemplates/<uid>
To add a data set notification template you can issue a POST request:
POST /api/33/dataSetNotficationTemplates
To delete a data set notification template you can issue a DELETE request:
DELETE /api/33/dataSetNotficationTemplates/<uid>
An example payload looks like this:
{
  "name": "dataSetNotificationTemplate1",
  "notificationTrigger": "COMPLETION",
  "relativeScheduledDays": 0,
  "notificationRecipient": "ORGANISATION_UNIT_CONTACT",
  "dataSets": [
    {
      "id": "eZDhcZi6FLP"
    }
  ],
  "deliveryChannels": ["SMS"],
  "subjectTemplate": "V{data_name}",
  "messageTemplate": "V{data_name} V{complete_registration_period}",
  "sendStrategy": "SINGLE_NOTIFICATION"
}
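A sketch of a create request, assuming the payload above is saved as template.json and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/33/dataSetNotficationTemplates" -H "Content-Type: application/json" -d @template.json -u admin:district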
1.65 Filled organisation unit levels
To retrieve the filled organisation unit levels you can issue a GET request:
GET /api/33/filledOrganisationUnitLevels
To set the organisation unit levels you can issue a POST request with a JSON payload looking like this:
"organisationUnitLevels":
[
{
"name":
"National",
"level":
1,
"offlineLevels":
3
},
{
"name":
"District",
"level":
2
},
{
"name":
"Chiefdom",
"level":
3
},
{
"name":
"Facility",
"level":
4
}
]
}
To do functional testing with curl you can issue the following command.
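A sketch of such a command, assuming the payload above is saved as levels.json and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/33/filledOrganisationUnitLevels" -H "Content-Type: application/json" -d @levels.json -u admin:district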
1.66 Static content
The staticContent resource allows you to upload and retrieve custom logos used in DHIS2. The resource lets the user upload a file with an associated key, which can later be retrieved using the key. Only PNG files are supported and can only be uploaded to the logo_banner and logo_front keys.
/api/33/staticContent
Key           Description
logo_banner   Logo in the application top menu on the left side.
logo_front    Logo on the login page above the login form.
POST /api/33/staticContent/<key>
Uploading multiple files with the same key will overwrite the existing file. This way, retrieving a
file for any given key will only return the latest file uploaded.
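A sketch of an upload request, assuming the endpoint accepts a multipart form field named file, a local PNG file named logo.png, and <server> standing in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/33/staticContent/logo_front" -F "file=@logo.png;type=image/png" -u admin:district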
GET /api/33/staticContent/<key>
There are two ways to retrieve the image, depending on the Accept header:
• Adding “Accept: text/html” to the HTTP header. In this case, the endpoint will return a default image if nothing is defined, and will return an image stream when a custom or default image is found.
curl "[Link]
staticContent/logo_front"
-H "Accept: text/html" -L -u
admin:district
• Adding “Accept: application/json” to the HTTP header. With this header set, the endpoint will never return a default image if the custom logo is not found. Instead, an error message will be returned. When the custom image is found, this endpoint will return a JSON response containing the path/URL to the respective image.
curl "[Link]
staticContent/logo_front"
-H "Accept: application/json" -L -u
admin:district
"images":
{
"png": "[Link]
staticContent/logo_front"
}
}
"httpStatus":
"Not
Found",
"httpStatusCode":
404,
"status":
"ERROR",
"message": "No
custom
file
found."
}
To use custom logos, you need to enable the corresponding system settings by setting it to true.
If the corresponding setting is false, the default logo will be served.
1.67 Configuration
To access configuration you can interact with the configuration resource. You can get XML and
JSON responses through the Accept header or by using the .json or .xml extensions. You can GET
all properties of the configuration from:
/api/33/configuration
You can send GET and POST requests to the following specific resources:
GET /api/33/configuration/systemId
For the CORS whitelist configuration you can make a POST request with an array of URLs to
whitelist as payload using “application/json” as content-type, for instance:
["[Link]",
"[Link]",
"[Link]"]
For POST requests, the configuration value should be sent as the request payload as text. The
following table shows appropriate configuration values for each property.
Configuration values
As an example, to set the feedback recipients user group you can invoke the following curl
command:
curl "localhost/api/33/configuration/
feedbackRecipients" -d "wl5cDMuUhmF"
-H "Content-Type:text/plain"-u
admin:district
1.68 Read-only configuration service
To access configuration you can also use the read-only configuration service. This service provides read-only access to UserSettings, SystemSettings and DHIS2 server configuration. You can get XML and JSON responses through the Accept header. You can GET all settings from:
/api/33/configuration/settings
GET /api/33/configuration/settings/filter?type=USER_SETTING
GET /api/33/configuration/settings/filter?type=CONFIGURATION
GET /api/33/configuration/settings/filter?type=USER_SETTING&type=SYSTEM_SETTING
SettingType values
Value Description
USER_SETTING To get user settings
SYSTEM_SETTING To get system settings
CONFIGURATION To get DHIS server settings
Note
Fields which are confidential will be provided in the output but without
values.
1.69 Internationalization
In order to retrieve key-value pairs for translated strings you can use the i18n resource.
/api/33/i18n
The endpoint is located at /api/i18n and the request format is a simple array of keys:
["access_denied",
"uploading_data_notification"]
The request must be of type POST and use application/json as content-type. An example using
curl, assuming the request data is saved as a file [Link]:
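A sketch of such a request (the file name is elided above; keys.json is used here as a stand-in, and <server> stands in for your DHIS2 instance URL):
curl -X POST "https://<server>/api/33/i18n" -H "Content-Type: application/json" -d @keys.json -u admin:district
The response contains the key-value pairs with the translated strings: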
"access_denied":
"Access
denied",
"uploading_data_notification": "Uploading locally stored
data to the server"
}
1.70 SVG conversion
The Web API provides a resource which can be used to convert SVG content into more widely used formats such as PNG and PDF. Ideally this conversion should happen on the client side, but not all client-side technologies are capable of performing this task. Currently PNG and PDF output formats are supported. The SVG content itself should be passed with an svg query parameter, and an optional query parameter filename can be used to specify the filename of the response attachment file. Note that the file extension should be omitted. For PNG you can send a POST request to the following URL with content type application/x-www-form-urlencoded, identical to a regular HTML form submission.
api/[Link]
For PDF you can send a POST request to the following URL with content type application/x-www-form-urlencoded.
api/[Link]
Query parameters
Query parameter   Required   Description
svg               Yes        The SVG content
filename          No         The file name for the returned attachment, without file extension
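A sketch of a PNG conversion request. The exact endpoint path is elided above, so /api/svg.png is used here purely as an assumption, and <server> stands in for your DHIS2 instance URL; the SVG markup is sent URL-encoded in the form body:
curl "https://<server>/api/svg.png" --data-urlencode "svg=<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"100\" height=\"100\"></svg>" --data-urlencode "filename=chart" -u admin:district -o chart.png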
1.71 Tracker Web API
The Tracker Web API consists of 3 endpoints that have full CRUD (create, read, update, delete) support. The 3 endpoints are /api/trackedEntityInstances, /api/enrollments and /api/events, and they are responsible for tracked entity instance, enrollment and event items.
1.71.1 Tracked entity instance management
Tracked entity instances have full CRUD support in the API. Together with the API for enrollments, most operations needed for working with tracked entity instances and programs are supported.
/api/33/trackedEntityInstances
For creating a new person in the system, you will be working with the trackedEntityInstances
resource. A template payload can be seen below:
{
  "trackedEntity": "tracked-entity-id",
  "orgUnit": "org-unit-id",
  "geometry": "<GeoJSON>",
  "attributes": [
    {
      "attribute": "attribute-id",
      "value": "attribute-value"
    }
  ]
}
The field “geometry” accepts a GeoJSON object, where the type of the GeoJSON has to match the featureType of the TrackedEntityType definition. An example GeoJSON object looks like this:
"type":
"Point",
"coordinates":
[1,
1]
}
The “coordinates” field was introduced in 2.29 and accepts a coordinate or a polygon as a value. To get the IDs for relationships and attributes you can have a look at the respective resources relationshipTypes and trackedEntityAttributes. To create a tracked entity instance you must use the HTTP POST method. You can post the payload to the following URL:
/api/trackedEntityInstances
For example, let us create a new instance of a person tracked entity and specify its first name and
last name attributes:
"trackedEntity":
"nEenWmSyUEp",
"orgUnit":
"DiszpKrYNg8",
"attributes":
[
{
"attribute":
"w75KJ2mc4zz",
"value":
"Joe"
},
{
"attribute":
"zDhUuAYrxNC",
275
1 Web API 1.71.1 Tracked entity instance management
"value":
"Smith"
}
]
}
To push this to the server you can use the cURL command like this:
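A sketch, assuming the payload above is saved as tei.json and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/trackedEntityInstances" -H "Content-Type: application/json" -d @tei.json -u admin:district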
To create multiple instances in one request you can wrap the payload in an outer array like this
and POST to the same resource as above:
"trackedEntityInstances":
[
{
"trackedEntity":
"nEenWmSyUEp",
"orgUnit":
"DiszpKrYNg8",
"attributes":
[
"attribute":
"w75KJ2mc4zz",
"value":
"Joe"
},
"attribute":
"zDhUuAYrxNC",
"value":
"Smith"
}
]
},
{
"trackedEntity":
"nEenWmSyUEp",
"orgUnit":
"DiszpKrYNg8",
"attributes":
[
"attribute":
"w75KJ2mc4zz",
"value":
"Jennifer"
},
276
1 Web API 1.71.1 Tracked entity instance management
"attribute":
"zDhUuAYrxNC",
"value":
"Johnson"
}
]
}
]
}
The system does not allow the creation of a tracked entity instance (as well as enrollment and
event) with a UID that was already used in the system. That means that UIDs cannot be reused.
For updating a tracked entity instance, the payload is equal to the previous section. The
difference is that you must use the HTTP PUT method for the request when sending the payload.
You will also need to append the person identifier to the trackedEntityInstances resource in the
URL like this, where <tracked-entity-instance-identifier> should be replaced by the
identifier of the tracked entity instance:
/api/trackedEntityInstances/<tracked-entity-instance-id>
The payload has to contain all, even non-modified, attributes and relationships. Attributes or
relationships that were present before and are not present in the current payload any more will
be removed from the system. This means that if attributes/relationships are empty in the current
payload, all existing attributes/relationships will be deleted from the system. From 2.31, it is
possible to ignore empty attributes/relationships in the current payload. A request parameter of
ignoreEmptyCollection set to true can be used in case you do not wish to send in any
attributes/relationships and also do not want them to be deleted from the system.
It is not allowed to update an already deleted tracked entity instance. Also, it is not allowed to
mark a tracked entity instance as deleted via an update request. The same rules apply to
enrollments and events.
In order to delete a tracked entity instance, make a request to the URL identifying the tracked
entity instance with the DELETE method. The URL is equal to the one above used for update.
It is also possible to both create (and update) a tracked entity instance and at the same time
enroll into a program.
{
  "trackedEntity": "tracked-entity-id",
  "orgUnit": "org-unit-id",
  "attributes": [
    {
      "attribute": "attribute-id",
      "value": "attribute-value"
    }
  ],
  "enrollments": [
    {
      "orgUnit": "org-unit-id",
      "program": "program-id",
      "enrollmentDate": "2013-09-17",
      "incidentDate": "2013-09-17"
    },
    {
      "orgUnit": "org-unit-id",
      "program": "program-id",
      "enrollmentDate": "2013-09-17",
      "incidentDate": "2013-09-17"
    }
  ]
}
You would send this to the server as you would normally when creating or updating a new
tracked entity instance.
Complete example of a payload including tracked entity instance, enrollment and event
It is also possible to create (and update) a tracked entity instance, at the same time enroll into a
program and create an event.
"trackedEntityType":
"nEenWmSyUEp",
"orgUnit":
"DiszpKrYNg8",
"attributes":
[
{
"attribute":
"w75KJ2mc4zz",
"value":
"Joe"
},
{
"attribute":
"zDhUuAYrxNC",
"value":
"Rufus"
},
{
"attribute":
"cejWyOfXge6",
"value":
"Male"
}
],
"enrollments":
[
{
"orgUnit":
"DiszpKrYNg8",
"program":
"ur1Edk5Oe2n",
"enrollmentDate":
"2017-09-15",
"incidentDate":
"2017-09-15",
"events":
[
"program":
"ur1Edk5Oe2n",
"orgUnit":
"DiszpKrYNg8",
"eventDate":
"2017-10-17",
"status":
"COMPLETED",
"storedBy":
"admin",
"programStage":
"EPEcjy3FWmI",
279
1 Web API 1.71.1 Tracked entity instance management
"coordinate":
{
"latitude":
"59.8",
"longitude":
"10.9"
},
"dataValues":
[
"dataElement":
"qrur9Dvnyt5",
"value":
"22"
},
"dataElement":
"oZg33kd9taw",
"value":
"Male"
}
]
},
"program":
"ur1Edk5Oe2n",
"orgUnit":
"DiszpKrYNg8",
"eventDate":
"2017-10-17",
"status":
"COMPLETED",
"storedBy":
"admin",
"programStage":
"EPEcjy3FWmI",
"coordinate":
{
"latitude":
"59.8",
"longitude":
"10.9"
},
"dataValues":
[
"dataElement":
"qrur9Dvnyt5",
"value":
"26"
},
280
1 Web API 1.71.1 Tracked entity instance management
"dataElement":
"oZg33kd9taw",
"value":
"Female"
}
]
}
]
}
]
}
You would send this to the server as you would normally when creating or updating a new
tracked entity instance.
Tracked entity instance attributes that are using automatic generation of unique values have
three endpoints that are used by apps. The endpoints are all used for generating and reserving
values.
In 2.29 we introduced TextPattern for defining and generating these patterns. All existing
patterns will be converted to a valid TextPattern when upgrading to 2.29.
Note
As of 2.29, all these endpoints will require you to include any variables
reported by the requiredValues endpoint listed as required. Existing
patterns, consisting of only #, will be upgraded to the new TextPattern
syntax RANDOM(<old-pattern>). The RANDOM segment of the
TextPattern is not a required variable, so this endpoint will work as
before for patterns defined before 2.29.
A TextPattern can contain variables that change based on different factors. Some of these factors
will be unknown to the server, so the values for these variables have to be supplied when
generating and reserving values.
This endpoint will return a map of required and optional values, that the server will inject into the
TextPattern when generating new values. Required variables have to be supplied for the
generation, but optional variables should only be supplied if you know what you are doing.
GET /api/33/trackedEntityAttributes/Gs1ICEQTPlG/requiredValues
"REQUIRED":
["ORG_UNIT_CODE"],
"OPTIONAL":
["RANDOM"]
}
Online web apps and other clients that want to generate a value that will be used right away can
use the simple generate endpoint. This endpoint will generate a value that is guaranteed to be
unique at the time of generation. The value is also guaranteed not to be reserved. As of 2.29, this
endpoint will also reserve the value generated for 3 days.
If your TextPattern includes required values, you can pass them as parameters like the example
below:
The expiration time can also be overridden at the time of generation, by adding ?expiration=<number-of-days> to the request.
GET /api/33/trackedEntityAttributes/Gs1ICEQTPlG/generate?ORG_UNIT_CODE=OSLO
{
  "ownerObject": "TRACKEDENTITYATTRIBUTE",
  "ownerUid": "Gs1ICEQTPlG",
  "key": "RANDOM(X)-OSL",
  "value": "C-OSL",
  "created": "2018-03-02T[Link].680",
  "expiryDate": "2018-03-05T[Link].678"
}
The generate and reserve endpoint is used by offline clients that need to be able to register
tracked entities with unique ids. They will reserve a number of unique ids that this device will
then use when registering new tracked entity instances. The endpoint is called to retrieve a
number of tracked entity instance reserved values. An optional parameter numberToReserve
specifies how many ids to generate (default is 1).
If your TextPattern includes required values, you can pass them as parameters like the example
below:
Similar to the /generate endpoint, this endpoint can also specify the expiration time in the same
way. By adding the ?expiration=<number-of-days> you can override the default 60 days.
GET /api/33/trackedEntityAttributes/Gs1ICEQTPlG/generateAndReserve?
numberToReserve=3&ORG_UNIT_CODE=OSLO
[
  {
    "ownerObject": "TRACKEDENTITYATTRIBUTE",
    "ownerUid": "Gs1ICEQTPlG",
    "key": "RANDOM(X)-OSL",
    "value": "B-OSL",
    "created": "2018-03-02T[Link].175",
    "expiryDate": "2018-05-01T[Link].174"
  },
  {
    "ownerObject": "TRACKEDENTITYATTRIBUTE",
    "ownerUid": "Gs1ICEQTPlG",
    "key": "RANDOM(X)-OSL",
    "value": "Q-OSL",
    "created": "2018-03-02T[Link].175",
    "expiryDate": "2018-05-01T[Link].174"
  },
  {
    "ownerObject": "TRACKEDENTITYATTRIBUTE",
    "ownerUid": "Gs1ICEQTPlG",
    "key": "RANDOM(X)-OSL",
    "value": "S-OSL",
    "created": "2018-03-02T[Link].175",
    "expiryDate": "2018-05-01T[Link].174"
  }
]
Reserved values are currently not accessible through the api, however, they are returned by the
generate and generateAndReserve endpoints. The following table explains the properties of
the reserved value object:
Reserved values
Property     Description
ownerObject  The metadata type referenced when generating and reserving the value. Currently only TRACKEDENTITYATTRIBUTE is supported.
ownerUid     The uid of the metadata object referenced when generating and reserving the value.
key          A partially generated value where generated segments are not yet added.
value        The fully resolved value reserved. This is the value you send to the server when storing data.
created      The timestamp when the reservation was made.
expiryDate   The timestamp when the reservation will no longer be reserved.
Expired reservations are removed daily. If a pattern changes, values that were already reserved
will be accepted when storing data, even if they don’t match the new pattern, as long as the
reservation has not expired.
Working with image attributes is a lot like working with file data values. The value of an attribute
with the image value type is the id of the associated file resource. A GET request to the /api/
trackedEntityInstances/<entityId>/<attributeId>/image endpoint will return the
actual image. The optional height and width parameters can be used to specify the dimensions of
the image.
curl "[Link]
height=200&width=200"
>
[Link]
The API also supports a dimension parameter. It can take the following values: small (254x254), medium (512x512), large (1024x1024) or original. Image type attributes will be stored in pre-generated sizes and will be furnished upon request based on the value of the dimension parameter.
curl "[Link]
dimension=medium"
To query for tracked entity instances you can interact with the /api/trackedEntityInstances
resource.
/api/33/trackedEntityInstances
The available organisation unit selection modes are explained in the following table.
Mode          Description
SELECTED      Organisation units defined in the request.
CHILDREN      The selected organisation units and the immediate children, i.e. the organisation units at the level below.
DESCENDANTS   The selected organisation units and all children, i.e. all organisation units in the sub-hierarchy.
ACCESSIBLE    The data view organisation units associated with the current user and all children, i.e. all organisation units in the sub-hierarchy. Will fall back to data capture organisation units associated with the current user if the former is not defined.
CAPTURE       The data capture organisation units associated with the current user and all children, i.e. all organisation units in the sub-hierarchy.
ALL           All organisation units in the system. Requires the ALL authority.
The query is case insensitive. The following rules apply to the query parameters.
• At least one organisation unit must be specified using the ou parameter (one or many), or
ouMode=ALL must be specified.
• Only one of the program and trackedEntity parameters can be specified (zero or one).
A query for all instances associated with a specific organisation unit can look like this:
/api/33/[Link]?ou=DiszpKrYNg8
To query for instances using one attribute with a filter and one attribute without a filter, with one
organisation unit using the descendant organisation unit query mode:
/api/33/[Link]?filter=zHXD5Ve1Efw:EQ:A
&filter=AMpUYgxuCaE&ou=DiszpKrYNg8;yMCshbaVExv
A query for instances where one attribute is included in the response and one attribute is used
as a filter:
/api/33/[Link]?filter=zHXD5Ve1Efw:EQ:A
&filter=AMpUYgxuCaE:LIKE:Road&ou=DiszpKrYNg8
A query where multiple operand and filters are specified for a filter item:
api/33/[Link]?ou=DiszpKrYNg8&program=ur1Edk5Oe2n
&filter=lw1SqmMlnfh:GT:150:LT:190
api/33/[Link]?ou=DiszpKrYNg8
&filter=dv3nChNSIxy:IN:Scott;Jimmy;Santiago
To constrain the response to instances which are part of a specific program you can include a
program query parameter:
api/33/[Link]?filter=zHXD5Ve1Efw:EQ:A&ou=O6uvpzGd5pu
&ouMode=DESCENDANTS&program=ur1Edk5Oe2n
api/33/[Link]?filter=zHXD5Ve1Efw:EQ:A&ou=O6uvpzGd5pu
&program=ur1Edk5Oe2n&programStartDate=2013-01-01&programEndDate=2013-09-01
To constrain the response to instances of a specific tracked entity you can include a tracked
entity query parameter:
api/33/[Link]?filter=zHXD5Ve1Efw:EQ:A&ou=O6uvpzGd5pu
&ouMode=DESCENDANTS&trackedEntity=cyl5vuJ5ETQ
By default the instances are returned in pages of size 50, to change this you can use the page and
pageSize query parameters:
api/33/[Link]?filter=zHXD5Ve1Efw:EQ:A&ou=O6uvpzGd5pu
&ouMode=DESCENDANTS&page=2&pageSize=3
Filter operators
Operator  Description
EQ        Equal to
GT        Greater than
GE        Greater than or equal to
LT        Less than
LE        Less than or equal to
NE        Not equal to
LIKE      Like (free text match)
IN        Equal to one of multiple values separated by “;”
This resource supports JSON, JSONP, XLS and CSV resource representations.
• json (application/json)
• jsonp (application/javascript)
• xml (application/xml)
The response in JSON/XML is in object format and can look like the following. Please note that
field filtering is supported, so if you want a full view, you might want to add fields=* to the
query:
"trackedEntityInstances":
[
{
"lastUpdated":
"2014-03-28
[Link].399",
"trackedEntity":
"cyl5vuJ5ETQ",
"created":
"2014-03-26
[Link].997",
"orgUnit":
"ueuQlqb8ccl",
"trackedEntityInstance":
"tphfdyIiVL6",
"relationships":
[],
"attributes":
[
"displayName":
"Address",
"attribute":
"AMpUYgxuCaE",
"type":
"string",
"value":
"2033
Akasia St"
},
288
1 Web API 1.71.1 Tracked entity instance management
"displayName":
"TB
number",
"attribute":
"ruQQnf6rswq",
"type":
"string",
"value": "1Z 989
408 56 9356 521 9"
},
"displayName":
"Weight in
kg",
"attribute":
"OvY4VVhSDeJ",
"type":
"number",
"value":
"68.1"
},
"displayName":
"Email",
"attribute":
"NDXw0cluzSw",
"type":
"string",
"value":
"LiyaEfrem@[Link]"
},
"displayName":
"Gender",
"attribute":
"cejWyOfXge6",
"type":
"optionSet",
"value":
"Female"
},
"displayName":
"Phone
number",
"attribute":
"P2cwLGskgxn",
"type":
"phoneNumber",
"value":
"085
813
9447"
},
289
1 Web API 1.71.1 Tracked entity instance management
"displayName":
"First
name",
"attribute":
"dv3nChNSIxy",
"type":
"string",
"value":
"Liya"
},
"displayName":
"Last
name",
"attribute":
"hwlRTFIFSUq",
"type":
"string",
"value":
"Efrem"
},
"code":
"Height
in cm",
"displayName":
"Height in
cm",
"attribute":
"lw1SqmMlnfh",
"type":
"number",
"value":
"164"
},
"code":
"City",
"displayName":
"City",
"attribute":
"VUvgVao8Y5z",
"type":
"string",
"value":
"Kranskop"
},
"code":
"State",
290
1 Web API 1.71.1 Tracked entity instance management
"displayName":
"State",
"attribute":
"GUOBQt5K2WI",
"type":
"number",
"value":
"KwaZulu-
Natal"
},
"code":
"Zip
code",
"displayName":
"Zip
code",
"attribute":
"n9nUvfpTsxQ",
"type":
"number",
"value":
"3282"
},
{
"code":
"National
identifier",
"displayName":
"National
identifier",
"attribute":
"AuPLng5hLbE",
"type":
"string",
"value":
"465700042"
},
"code":
"Blood
type",
"displayName":
"Blood
type",
"attribute":
"H9IlTX2X6SL",
"type":
"string",
"value":
"B-"
},
"code":
"Latitude",
291
1 Web API 1.71.1 Tracked entity instance management
"displayName":
"Latitude",
"attribute":
"Qo571yj6Zcn",
"type":
"string",
"value":
"-30.659626"
},
"code":
"Longitude",
"displayName":
"Longitude",
"attribute":
"RG7uGl4w5Jq",
"type":
"string",
"value":
"26.916172"
}
]
}
]
}
To query for tracked entity instances you can interact with the /api/trackedEntityInstances/query resource. There are two types of queries: one where a query parameter and optionally attribute parameters are defined, and one where attribute and filter parameters are defined.
This endpoint uses a more compact “grid” format, and is an alternative to the query in the
previous section.
/api/33/trackedEntityInstances/query
The available organisation unit selection modes are explained in the following table.
Mode          Description
SELECTED      Organisation units defined in the request.
CHILDREN      Immediate children, i.e. only the first level below, of the organisation units defined in the request.
DESCENDANTS   All children, i.e. all levels below, e.g. including children of children, of the organisation units defined in the request.
ACCESSIBLE    All descendants of the data view organisation units associated with the current user. Will fall back to data capture organisation units associated with the current user if the former is not defined.
CAPTURE       The data capture organisation units associated with the current user and all children, i.e. all organisation units in the sub-hierarchy.
ALL           All organisation units in the system. Requires the ALL authority.
Note that you can specify “attribute” with filters or directly using the “filter” params for
constraining the instances to return.
• If “query” is specified without any attributes or program, then all attributes that are marked as “Display in List without Program” are included in the response.
• If program is specified, all the attributes linked to the program will be included in the
response.
• If tracked entity type is specified, then all tracked entity type attributes will be included in
the response.
You can specify queries with words separated by space - in that situation the system will query
for each word independently and return records where each word is contained in any attribute. A
query item can be specified once as an attribute and once as a filter if needed. The query is case
insensitive. The following rules apply to the query parameters.
• At least one organisation unit must be specified using the ou parameter (one or many), or
ouMode=ALL must be specified.
• Only one of the program and trackedEntity parameters can be specified (zero or one).
A query for all instances associated with a specific organisation unit can look like this:
/api/33/trackedEntityInstances/[Link]?ou=DiszpKrYNg8
A query on all attributes for a specific value and organisation unit, using an exact word match:
/api/33/trackedEntityInstances/[Link]?query=scott&ou=DiszpKrYNg8
A query on all attributes for a specific value, using a partial word match:
/api/33/trackedEntityInstances/[Link]?query=LIKE:scott&ou=DiszpKrYNg8
You can query on multiple words separated by the URL character for space which is %20, will use
a logical AND query for each word:
/api/33/trackedEntityInstances/[Link]?query=isabel%20may&ou=DiszpKrYNg8
/api/33/trackedEntityInstances/[Link]?query=isabel
&attribute=dv3nChNSIxy&attribute=AMpUYgxuCaE&ou=DiszpKrYNg8
To query for instances using one attribute with a filter and one attribute without a filter, with one
organisation unit using the descendants organisation unit query mode:
/api/33/trackedEntityInstances/[Link]?attribute=zHXD5Ve1Efw:EQ:A
&attribute=AMpUYgxuCaE&ou=DiszpKrYNg8;yMCshbaVExv
A query for instances where one attribute is included in the response and one attribute is used
as a filter:
/api/33/trackedEntityInstances/[Link]?attribute=zHXD5Ve1Efw:EQ:A
&filter=AMpUYgxuCaE:LIKE:Road&ou=DiszpKrYNg8
A query where multiple operand and filters are specified for a filter item:
/api/33/trackedEntityInstances/[Link]?ou=DiszpKrYNg8&program=ur1Edk5Oe2n
&filter=lw1SqmMlnfh:GT:150:LT:190
/api/33/trackedEntityInstances/[Link]?ou=DiszpKrYNg8
&attribute=dv3nChNSIxy:IN:Scott;Jimmy;Santiago
To constrain the response to instances which are part of a specific program you can include a
program query parameter:
/api/33/trackedEntityInstances/[Link]?filter=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&ouMode=DESCENDANTS&program=ur1Edk5Oe2n
/api/33/trackedEntityInstances/[Link]?filter=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&program=ur1Edk5Oe2n&programStartDate=2013-01-01
&programEndDate=2013-09-01
To constrain the response to instances of a specific tracked entity you can include a tracked
entity query parameter:
/api/33/trackedEntityInstances/[Link]?attribute=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&ouMode=DESCENDANTS&trackedEntity=cyl5vuJ5ETQ
By default the instances are returned in pages of size 50, to change this you can use the page and
pageSize query parameters:
/api/33/trackedEntityInstances/[Link]?attribute=zHXD5Ve1Efw:EQ:A
&ou=O6uvpzGd5pu&ouMode=DESCENDANTS&page=2&pageSize=3
To query for instances which have events of a given status within a given time span:
/api/33/trackedEntityInstances/[Link]?ou=O6uvpzGd5pu
&program=ur1Edk5Oe2n&eventStatus=LATE_VISIT
&eventStartDate=2014-01-01&eventEndDate=2014-09-01
Filter operators
Operator  Description
EQ        Equal to
GT        Greater than
GE        Greater than or equal to
LT        Less than
LE        Less than or equal to
NE        Not equal to
LIKE      Like (free text match)
IN        Equal to one of multiple values separated by “;”
This resource supports JSON, JSONP, XLS and CSV resource representations.
• json (application/json)
• jsonp (application/javascript)
• xml (application/xml)
• csv (application/csv)
• xls (application/[Link]-excel)
The response in JSON is in a tabular format and can look like the following. The headers section describes the content of each column. The instance, created, last updated, org unit and tracked entity columns are always present. The following columns correspond to attributes specified in the query. The rows section contains one row per instance.
"headers":
[
{
"name":
"instance",
"column":
"Instance",
"type":
"[Link]"
},
{
"name":
"created",
"column":
"Created",
"type":
"[Link]"
},
{
"name":
"lastupdated",
297
1 Web API 1.71.1 Tracked entity instance management
"column":
"Last
updated",
"type":
"[Link]"
},
{
"name":
"ou",
"column":
"Org
unit",
"type":
"[Link]"
},
{
"name":
"te",
"column":
"Tracked
entity",
"type":
"[Link]"
},
{
"name":
"zHXD5Ve1Efw",
"column":
"Date of
birth
type",
"type":
"[Link]"
},
{
"name":
"AMpUYgxuCaE",
"column":
"Address",
"type":
"[Link]"
}
],
"metaData":
{
"names":
{
"cyl5vuJ5ETQ":
"Person"
}
},
"width":
7,
"height":
7,
"rows":
[
[
"yNCtJ6vhRJu",
298
1 Web API 1.71.1 Tracked entity instance management
"2013-09-08
[Link].0",
"2014-01-09
[Link].19",
"DiszpKrYNg8",
"cyl5vuJ5ETQ",
"A",
"21
Kenyatta
Road"
],
[
"fSofnQR6lAU",
"2013-09-08
[Link].0",
"2014-01-09
[Link].62",
"DiszpKrYNg8",
"cyl5vuJ5ETQ",
"A",
"56
Upper
Road"
],
[
"X5wZwS5lgm2",
"2013-09-08
[Link].0",
"2014-01-09
[Link].11",
"DiszpKrYNg8",
"cyl5vuJ5ETQ",
"A",
"56
Main
Road"
],
[
"pCbogmlIXga",
"2013-09-08
[Link].0",
"2014-01-09
[Link].02",
"DiszpKrYNg8",
"cyl5vuJ5ETQ",
"A",
"12 Lower
Main
Road"
],
[
"WnUXrY4XBMM",
"2013-09-08
[Link].0",
"2014-01-09
[Link].97",
"DiszpKrYNg8",
299
1 Web API 1.71.1 Tracked entity instance management
"cyl5vuJ5ETQ",
"A",
"13
Main
Road"
],
[
"xLNXbDs9uDF",
"2013-09-08
[Link].0",
"2014-01-09
[Link].66",
"DiszpKrYNg8",
"cyl5vuJ5ETQ",
"A",
"14
Mombasa
Road"
],
[
"foc5zag6gbE",
"2013-09-08
[Link].0",
"2014-01-09
[Link].93",
"DiszpKrYNg8",
"cyl5vuJ5ETQ",
"A",
"15
Upper
Hill"
]
]
}
To create, read, update and delete tracked entity instance filters you can interact with the /api/
trackedEntityInstanceFilters resource.
/api/33/trackedEntityInstanceFilters
For creating and updating a tracked entity instance filter in the system, you will be working with
the trackedEntityInstanceFilters resource. The tracked entity instance filter definitions are used in
the Tracker Capture app to display relevant predefined “Working lists” in the tracker user
interface.
Payload
Period definition
To query for tracked entity instance filters in the system, you can interact with the /api/
trackedEntityInstanceFilters resource.
1.71.2 Enrollment management
Enrollments have full CRUD support in the API. Together with the API for tracked entity instances, most operations needed for working with tracked entity instances and programs are supported.
/api/33/enrollments
For enrolling persons into a program, you will need to first get the identifier of the person from
the trackedEntityInstances resource. Then, you will need to get the program identifier from the
programs resource. A template payload can be seen below:
"trackedEntityInstance":
"ZRyCnJ1qUXS",
"orgUnit":
"ImspTQPwCqd",
"program":
"S8uo8AlvYMz",
"enrollmentDate":
"2013-09-17",
"incidentDate":
"2013-09-17"
}
This payload should be used in a POST request to the enrollments resource identified by the
following URL:
/api/33/enrollments
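A sketch, assuming the payload above is saved as enrollment.json and <server> stands in for your DHIS2 instance URL:
curl -X POST "https://<server>/api/33/enrollments" -H "Content-Type: application/json" -d @enrollment.json -u admin:district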
For cancelling or completing an enrollment, you can make a PUT request to the enrollments
resource, including the identifier and the action you want to perform. For cancelling an
enrollment for a tracked entity instance:
/api/33/enrollments/<enrollment-id>/cancelled
For completing an enrollment for a tracked entity instance you can make a PUT request to the
following URL:
/api/33/enrollments/<enrollment-id>/completed
For deleting an enrollment, you can make a DELETE request to the following URL:
/api/33/enrollments/<enrollment-id>
To query for enrollments you can interact with the /api/enrollments resource.
/api/33/enrollments
The available organisation unit selection modes are explained in the following table.
Mode          Description
SELECTED      Organisation units defined in the request (default).
CHILDREN      Immediate children, i.e. only the first level below, of the organisation units defined in the request.
DESCENDANTS   All children, i.e. all levels below, e.g. including children of children, of the organisation units defined in the request.
ACCESSIBLE    All descendants of the data view organisation units associated with the current user. Will fall back to data capture organisation units associated with the current user if the former is not defined.
ALL           All organisation units in the system. Requires the ALL authority.
The query is case insensitive. The following rules apply to the query parameters.
• At least one organisation unit must be specified using the ou parameter (one or many), or
ouMode=ALL must be specified.
• Only one of the program and trackedEntity parameters can be specified (zero or one).
A query for all enrollments associated with a specific organisation unit can look like this:
/api/33/[Link]?ou=DiszpKrYNg8
To constrain the response to enrollments which are part of a specific program you can include a
program query parameter:
/api/33/[Link]?ou=O6uvpzGd5pu&ouMode=DESCENDANTS&program=ur1Edk5Oe2n
/api/33/[Link]?&ou=O6uvpzGd5pu&program=ur1Edk5Oe2n
&programStartDate=2013-01-01&programEndDate=2013-09-01
To constrain the response to enrollments of a specific tracked entity you can include a tracked
entity query parameter:
/api/33/[Link]?ou=O6uvpzGd5pu&ouMode=DESCENDANTS&trackedEntity=cyl5vuJ5ETQ
To constrain the response to enrollments of a specific tracked entity instance you can include a
tracked entity instance query parameter, in this case we have restricted it to available
enrollments viewable for current user:
/api/33/[Link]?ouMode=ACCESSIBLE&trackedEntityInstance=tphfdyIiVL6
By default the enrollments are returned in pages of size 50, to change this you can use the page
and pageSize query parameters:
/api/33/[Link]?ou=O6uvpzGd5pu&ouMode=DESCENDANTS&page=2&pageSize=3
This resource supports JSON, JSONP, XLS and CSV resource representations.
• json (application/json)
• jsonp (application/javascript)
• xml (application/xml)
The response in JSON/XML is in object format and can look like the following. Please note that
field filtering is supported, so if you want a full view, you might want to add fields=* to the
query:
"enrollments":
[
{
"lastUpdated":
"2014-03-28T[Link].512+0000",
"trackedEntity":
"cyl5vuJ5ETQ",
"created":
"2014-03-28T[Link].500+0000",
"orgUnit":
"DiszpKrYNg8",
"program":
"ur1Edk5Oe2n",
"enrollment":
"HLFOK0XThjr",
"trackedEntityInstance":
"qv0j4JBXQX0",
"followup":
false,
"enrollmentDate":
"2013-05-23T[Link].490+0000",
"incidentDate":
"2013-05-10T[Link].490+0000",
"status":
"ACTIVE"
}
]
}
1.71.3 Events
/api/33/events
DHIS2 supports three kinds of events: single events with no registration (also referred to as
anonymous events), single event with registration and multiple events with registration.
Registration implies that the data is linked to a tracked entity instance which is identified using
some sort of identifier.
To send events to DHIS2 you must interact with the events resource. The approach to sending
events is similar to sending aggregate data values. You will need a program which can be looked
up using the programs resource, an orgUnit which can be looked up using the organisationUnits
resource, and a list of valid data element identifiers which can be looked up using the
dataElements resource. For events with registration, a tracked entity instance identifier is
required, read about how to get this in the section about the trackedEntityInstances resource.
For sending events to programs with multiple stages, you will need to also include the
programStage identifier, the identifiers for programStages can be found in the programStages
resource.
A simple single event with no registration example payload in XML format where we send events
from the “Inpatient morbidity and mortality” program for the “Ngelehun CHC” facility in the demo
database can be seen below:
<?xml version="1.0" encoding="utf-8"?>
<event program="eBAyeGv0exc" orgUnit="DiszpKrYNg8" eventDate="2013-05-17" status="COMPLETED" storedBy="admin">
  <coordinate latitude="59.8" longitude="10.9" />
  <dataValues>
    <dataValue dataElement="qrur9Dvnyt5" value="22" />
    <dataValue dataElement="oZg33kd9taw" value="Male" />
    <dataValue dataElement="msodh3rEMJa" value="2013-05-18" />
  </dataValues>
</event>
To perform some testing we can save the XML payload as a file [Link] and send it as a
POST request to the events resource in the API using curl with the following command:
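A sketch of such a command (the file name is elided above; event.xml is used here as a stand-in, and <server> stands in for your DHIS2 instance URL):
curl -X POST "https://<server>/api/33/events" -H "Content-Type: application/xml" -d @event.xml -u admin:district
The equivalent payload in JSON format looks like this: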
"program":
"eBAyeGv0exc",
"orgUnit":
"DiszpKrYNg8",
"eventDate":
"2013-05-17",
"status":
"COMPLETED",
307
1 Web API 1.71.3 Événements
"completedDate":
"2013-05-18",
"storedBy":
"admin",
"coordinate":
{
"latitude":
59.8,
"longitude":
10.9
},
"dataValues":
[
{
"dataElement":
"qrur9Dvnyt5",
"value":
"22"
},
{
"dataElement":
"oZg33kd9taw",
"value":
"Male"
},
{
"dataElement":
"msodh3rEMJa",
"value":
"2013-05-18"
}
]
}
To send this you can save it to a file called [Link] and use curl like this:
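A sketch (the file name is elided above; event.json is used here as a stand-in, and <server> stands in for your DHIS2 instance URL):
curl -X POST "https://<server>/api/33/events" -H "Content-Type: application/json" -d @event.json -u admin:district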
We also support sending multiple events at the same time. A payload in XML format might look
like this:
<?xml version="1.0" encoding="utf-8"?>
<events>
  <event program="eBAyeGv0exc" orgUnit="DiszpKrYNg8" eventDate="2013-05-17" status="COMPLETED" storedBy="admin">
    <coordinate latitude="59.8" longitude="10.9" />
    <dataValues>
      <dataValue dataElement="qrur9Dvnyt5" value="22" />
      <dataValue dataElement="oZg33kd9taw" value="Male" />
    </dataValues>
  </event>
  <event program="eBAyeGv0exc" orgUnit="DiszpKrYNg8" eventDate="2013-05-17" status="COMPLETED" storedBy="admin">
    <coordinate latitude="59.8" longitude="10.9" />
    <dataValues>
      <dataValue dataElement="qrur9Dvnyt5" value="26" />
      <dataValue dataElement="oZg33kd9taw" value="Female" />
    </dataValues>
  </event>
</events>
You will receive an import summary with the response which can be inspected in order to get
information about the outcome of the request, like how many values were imported successfully.
The payload in JSON format looks like this:
"events":
[
{
"program":
"eBAyeGv0exc",
"orgUnit":
"DiszpKrYNg8",
"eventDate":
"2013-05-17",
"status":
"COMPLETED",
"storedBy":
"admin",
"coordinate":
{
"latitude":
"59.8",
"longitude":
"10.9"
},
309
1 Web API 1.71.3 Événements
"dataValues":
[
"dataElement":
"qrur9Dvnyt5",
"value":
"22"
},
"dataElement":
"oZg33kd9taw",
"value":
"Male"
}
]
},
{
"program":
"eBAyeGv0exc",
"orgUnit":
"DiszpKrYNg8",
"eventDate":
"2013-05-17",
"status":
"COMPLETED",
"storedBy":
"admin",
"coordinate":
{
"latitude":
"59.8",
"longitude":
"10.9"
},
"dataValues":
[
"dataElement":
"qrur9Dvnyt5",
"value":
"26"
},
"dataElement":
"oZg33kd9taw",
"value":
"Female"
}
]
}
]
}
You can also use GeoJson to store any kind of geometry on your event. An example payload
using GeoJson instead of the former latitude and longitude properties can be seen here:
"program":
"eBAyeGv0exc",
"orgUnit":
"DiszpKrYNg8",
"eventDate":
"2013-05-17",
"status":
"COMPLETED",
"storedBy":
"admin",
"geometry":
{
"type":
"POINT",
"coordinates":
[59.8,
10.9]
},
"dataValues":
[
{
"dataElement":
"qrur9Dvnyt5",
"value":
"22"
},
{
"dataElement":
"oZg33kd9taw",
"value":
"Male"
},
{
"dataElement":
"msodh3rEMJa",
"value":
"2013-05-18"
}
]
}
As part of the import summary you will also get the identifier reference to the event you just
sent, together with a href element which points to the server location of this event. The table
below describes the meaning of each element.
Parameter      Type    Required  Options (default first)                                       Description
program        string  true                                                                    Identifier of the single event with no registration program
orgUnit        string  true                                                                    Identifier of the organisation unit where the event took place
eventDate      date    true                                                                    The date of when the event occurred
completedDate  date    false                                                                   The date of when the event is completed. If not provided, the current date is selected as the event completed date
status         enum    false     ACTIVE | COMPLETED | VISITED | SCHEDULE | OVERDUE | SKIPPED   Whether the event is complete or not
storedBy       string  false     Defaults to current user                                      Who stored this event (can be username, system-name, etc)
coordinate     double  false                                                                   Refers to where the event took place geographically (latitude and longitude)
dataElement    string  true                                                                    Identifier of data element
value          string  true                                                                    Data value or measure for this event
By default the orgUnit parameter will match on the ID, you can also select the orgUnit id
matching scheme by using the parameter orgUnitIdScheme=SCHEME, where the options are: ID,
UID, UUID, CODE, and NAME. There is also the ATTRIBUTE: scheme, which matches on a unique
metadata attribute value.
To update an existing event, the format of the payload is the same, but the URL you are posting
to must add the identifier to the end of the URL string and the request must be PUT.
The payload has to contain all, even non-modified, attributes. Attributes that were present before
and are not present in the current payload any more will be removed by the system.
It is not allowed to update an already deleted event. The same applies to tracked entity instance
and enrollment.
To delete an existing event, all you need is to send a DELETE request with an identifier reference
to the server you are using.
A user can be assigned to an event. This can be done by including the appropriate property in the
payload when updating or creating the event.
"assignedUser": "<id>"
The id refers to the id of the user. Only one user can be assigned to an event at a time. User assignment must be enabled in the program stage before users can be assigned to events.
To get an existing event you can issue a GET request including the identifier like this:
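A sketch of such a request, where <uid> is the event identifier and <server> stands in for your DHIS2 instance URL:
curl "https://<server>/api/33/events/<uid>" -u admin:district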
This section explains how to read out the events that have been stored in the DHIS2 instance. For
more advanced uses of the event data, please see the section on event analytics. The output
format from the /api/events endpoint will match the format that is used to send events to it
(which the analytics event api does not support). Both XML and JSON are supported, either
through adding .json/.xml or by setting the appropriate Accept header. The query is paged by default and the default page size is 50 events. Field filtering works as it does for metadata: add the fields parameter and include the properties you want, e.g. ?fields=program,status.
Examples of the order parameter:
order=orgUnitName:DESC
order=lastUpdated:ASC
Note
If the query contains neither attributeCC nor attributeCos, the
server returns events for all attribute option combos where the user
has read access.
Examples
Query for all events for a certain organisation unit and its immediate children:
/api/29/[Link]?orgUnit=YuQRtpLP10I&ouMode=CHILDREN
Query for all events with all descendants of a certain organisation unit, implying all organisation
units in the sub-hierarchy:
/api/33/[Link]?orgUnit=O6uvpzGd5pu&ouMode=DESCENDANTS
Query for all events with a certain program and organisation unit:
/api/33/[Link]?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc
Query for all events with a certain program and organisation unit, sorting by due date ascending:
/api/33/[Link]?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc&order=dueDate
Query for the 10 events with the newest event date in a certain program and organisation unit -
by paging and ordering by due date descending:
/api/33/[Link]?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc
&order=eventDate:desc&pageSize=10&page=1
Query for all events with a certain program and organisation unit for a specific tracked entity
instance:
/api/33/[Link]?orgUnit=DiszpKrYNg8
&program=eBAyeGv0exc&trackedEntityInstance=gfVxE3ALA9m
Query for all events with a certain program and organisation unit older or equal to 2014-02-03:
/api/33/[Link]?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc&endDate=2014-02-03
Query for all events with a certain program stage, organisation unit and tracked entity instance in
the year 2014:
/api/33/[Link]?orgUnit=DiszpKrYNg8&program=eBAyeGv0exc
&trackedEntityInstance=gfVxE3ALA9m&startDate=2014-01-01&endDate=2014-12-31
Query files associated with event data values. In the specific case of fetching an image file an
additional parameter can be provided to fetch the image with different dimensions. If dimension
is not provided, the system will return the original image. The parameter will be ignored in case
of fetching non-image files e.g pdf. Possible dimension values are small(254 x 254), medium(512
x 512), large(1024 x 1024) or original. Any value other than those mentioned will be discarded
and the original image will be returned.
/api/33/events/files?eventUid=hcmcWlYkg9u&dataElementUid=C0W4aFuVm4P&dimension=small
Retrieve events with specified Organisation unit and Program, and use Attribute:Gq0oWTf2DtN
as identifier scheme
/api/events?orgUnit=DiszpKrYNg8&program=lxAQ7Zs9VYR&idScheme=Attribute:Gq0oWTf2DtN
Retrieve events with specified Organisation unit and Program, and use UID as identifier scheme
for orgUnits, Code as identifier scheme for Program stages, and Attribute:Gq0oWTf2DtN as
identifier scheme for the rest of the metadata with assigned attribute.
api/events.json?
orgUnit=DiszpKrYNg8&program=lxAQ7Zs9VYR&idScheme=Attribute:Gq0oWTf2DtN
&orgUnitIdScheme=UID&programStageIdScheme=Code
In addition to the above event query endpoint, there is an event grid query endpoint where a more compact "grid" format of events is returned. This is possible by interacting with the /api/events/query.json|xml|xls|csv endpoint.
/api/33/events/query
Most of the query parameters mentioned in the event querying and reading section above are valid here. However, since the grid to be returned comes with a specific set of columns that apply to all rows (events), it is mandatory to specify a program stage. It is not possible to mix events from different programs or program stages in the response.
Returning events from a single program stage also opens up new functionality, for example sorting and searching events based on their data element values. The api/events/query endpoint supports this. Below are some examples.
A query to return an event grid containing only selected data elements for a program stage
/api/33/events/query.json?orgUnit=DiszpKrYNg8&programStage=Zj7UnCAulEk
&dataElement=qrur9Dvnyt5,fWIAEtYVEGk,K6uUAvq500H&order=lastUpdated:desc
&pageSize=50&page=1&totalPages=true
A query to return an event grid containing all data elements of a program stage:

/api/33/events/query.json?orgUnit=DiszpKrYNg8&programStage=Zj7UnCAulEk
&includeAllDataElements=true

A query to return an event grid, filtering rows on a data element value (qrur9Dvnyt5 greater than 20 and less than 50):

/api/33/events/query.json?orgUnit=DiszpKrYNg8&programStage=Zj7UnCAulEk
&filter=qrur9Dvnyt5:GT:20:LT:50
In addition to the filtering, the above example also illustrates one thing: no data elements are explicitly requested for the grid. When this happens, the system defaults to returning only those data elements marked "Display in report" in the program stage configuration.
We can also extend the above query to return a grid sorted (asc|desc) by data element value:
/api/33/events/query.json?orgUnit=DiszpKrYNg8&programStage=Zj7UnCAulEk
&filter=qrur9Dvnyt5:GT:20:LT:50&order=qrur9Dvnyt5:desc
To create, read, update and delete event filters you can interact with the /api/eventFilters
resource.
/api/33/eventFilters
For creating and updating an event filter in the system, you will be working with the eventFilters resource; POST is used to create and PUT is used to update. The event filter definitions are used in the Tracker Capture app to display relevant predefined "Working lists" in the tracker user interface.
Request Payload
The available assigned user selection modes are explained in the following table.

Mode Description
CURRENT Assigned to the currently logged-in user
PROVIDED Assigned to the users provided in the "assignedUser" parameter
NONE Assigned to no users
ANY Assigned to any user
"program":
"ur1Edk5Oe2n",
"description": "Simple
Filter for TB
events",
"name":
"TB
events",
"eventQueryCriteria":
{
"organisationUnit":
"DiszpKrYNg8",
"eventStatus":
"COMPLETED",
"eventDate":
{
"startDate":
"2014-05-01",
"endDate":
"2019-03-20",
"startBuffer":
-5,
"endBuffer":
5,
"period":
"LAST_WEEK",
"type":
"RELATIVE"
322
1 Web API 1.71.3 Événements
},
"dataFilters":
[
{
"dataItem":
"abcDataElementUid",
"le":
"20",
"ge":
"10",
"lt":
"20",
"gt":
"10",
"in":
["India",
"Norway"],
"like":
"abc"
},
{
"dataItem":
"dateDataElementUid",
"dateFilter":
{
"startDate":
"2014-05-01",
"endDate":
"2019-03-20",
"type":
"ABSOLUTE"
}
},
{
"dataItem":
"anotherDateDataElementUid",
"dateFilter":
{
"startBuffer":
-5,
"endBuffer":
5,
"type":
"RELATIVE"
}
},
{
"dataItem":
"yetAnotherDateDataElementUid",
"dateFilter":
{
"period":
"LAST_WEEK",
"type":
"RELATIVE"
}
}
323
1 Web API 1.71.4 Relationships
],
"programStatus":
"ACTIVE"
}
}
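Assuming the payload above is saved in a file, a create request with curl could look like the following (file name, hostname and credentials are illustrative):

curl -X POST -d @eventFilter.json -H "Content-Type: application/json" "localhost/api/33/eventFilters" -u admin:district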
To retrieve a single event filter:

GET /api/33/eventFilters/{uid}

To list all event filters together with their full definitions:

GET /api/33/eventFilters?fields=*

All event filters for a specific program can be retrieved by filtering on the program property:

GET /api/33/eventFilters?filter=program:eq:IpHINAT79UW

To delete an event filter:

DELETE /api/33/eventFilters/{uid}
1.71.4 Relationships
Relationships are links between two entities in tracker. These entities can be tracked entity
instances, enrollments and events.
There are multiple endpoints that allow you to see, create, delete and update relationships. The most common is the /api/trackedEntityInstances endpoint, where you can include relationships in the payload to create or update them, or delete them by omitting them from the payload, similar to how you work with enrollments and events in the same endpoint. All the tracker endpoints, /api/trackedEntityInstances, /api/enrollments and /api/events, also list their relationships if requested in the field filter.
The standard endpoint for relationships is, however, /api/relationships. This endpoint provides all
the normal CRUD operations for relationships.
Listing relationships requires you to provide the UID of the tracked entity instance, enrollment or event that you want to list the relationships for:
GET /api/relationships?tei=ABCDEF12345
GET /api/relationships?enrollment=ABCDEF12345
GET /api/relationships?event=ABCDEF12345
This request will return a list of any relationship you have access to see that includes the
trackedEntityInstance, enrollment or event you specified. Each relationship is represented with
the following JSON:
"relationshipType":
"dDrh5UyCyvQ",
"relationshipName":
"Mother-
Child",
"relationship":
"t0HIBrc65Rm",
"bidirectional":
false,
"from":
{
"trackedEntityInstance":
{
"trackedEntityInstance":
"vOxUH373fy5"
}
},
"to":
{
"trackedEntityInstance":
{
"trackedEntityInstance":
"pybd813kIWx"
}
},
"created":
"2019-04-26T[Link].267",
"lastUpdated":
"2019-04-26T[Link].267"
}
You can also view a specific relationship using the following endpoint:

GET /api/relationships/<id>

To create or update a relationship, you can use the following endpoints with a payload like the one shown below:

POST /api/relationships
PUT /api/relationships
"relationshipType":
"dDrh5UyCyvQ",
"from":
{
"trackedEntityInstance":
{
"trackedEntityInstance":
"vOxUH373fy5"
}
325
1 Web API 1.71.5 Update strategies
},
"to":
{
"trackedEntityInstance":
{
"trackedEntityInstance":
"pybd813kIWx"
}
}
}
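With curl, creating the relationship above could look like the following (file name, hostname and credentials are illustrative):

curl -X POST -d @relationship.json -H "Content-Type: application/json" "localhost/api/relationships" -u admin:district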
To delete a relationship:

DELETE /api/relationships/<id>

The from and to properties of a relationship can also refer to an enrollment or an event instead of a tracked entity instance, using the following formats:

{
  "enrollment": {
    "enrollment": "<id>"
  }
}

{
  "event": {
    "event": "<id>"
  }
}
1.71.5 Update strategies

Two update strategies for all 3 tracker endpoints are supported: enrollment and event creation. This is useful when you have generated an identifier on the client side and are not sure if it was created or not on the server.

Parameter Description
CREATE Create only; this is the default behavior.
CREATE_AND_UPDATE Try to match on the ID; if it exists, update it, otherwise create it.

POST /api/33/trackedEntityInstances?strategy=CREATE_AND_UPDATE
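A curl request using this strategy might look like the following (payload file name, hostname and credentials are illustrative):

curl -X POST -d @trackedEntityInstances.json -H "Content-Type: application/json" "localhost/api/33/trackedEntityInstances?strategy=CREATE_AND_UPDATE" -u admin:district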
1.71.6 Tracker bulk deletion
Bulk deletion of tracker objects works in a similar fashion to adding and updating tracker objects; the only difference is that the importStrategy is DELETE.
"trackedEntityInstances":
[
{
"trackedEntityInstance":
"ID1"
},
{
"trackedEntityInstance":
"ID2"
},
{
"trackedEntityInstance":
"ID3"
}
]
}
"enrollments":
[
{
"enrollment":
"ID1"
},
{
"enrollment":
"ID2"
},
{
"enrollment":
"ID3"
}
]
}
"events":
[
{
"event":
"ID1"
},
{
"event":
"ID2"
},
{
"event":
"ID3"
}
]
}
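A bulk deletion request for the events payload above might look like the following (file name, hostname and credentials are illustrative):

curl -X POST -d @events.json -H "Content-Type: application/json" "localhost/api/33/events?importStrategy=DELETE" -u admin:district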
1.71.7 Identifier reuse and item deletion via POST and PUT methods
The system does not allow deletion of an item via an update (PUT) or create (POST) method. Therefore, a deleted attribute is ignored in both PUT and POST methods, and in the POST method it is by default set to false.
1.71.8 Import parameters
In addition to XML and JSON for event import/export, DHIS2 2.17 introduced support for the CSV format. Support for this format builds on what was described in the previous section, so here we only describe the CSV-specific parts.
To use the CSV format you must either use the /api/events.csv endpoint, or add Content-Type: text/csv for import and Accept: text/csv for export when using the /api/events endpoint.
The order of the columns in the CSV, which is used for both export and import, is shown in the example below:
EJNxP3WreNP,COMPLETED,<pid>,<psid>,<enrollment-id>,<ou>,2016-01-01,2016-01-01,,,<de>,1,,
EJNxP3WreNP,COMPLETED,<pid>,<psid>,<enrollment-id>,<ou>,2016-01-01,2016-01-01,,,<de>,2,,
qPEdI1xn7k0,COMPLETED,<pid>,<psid>,<enrollment-id>,<ou>,2016-01-01,2016-01-01,,,<de>,3,,
qPEdI1xn7k0,COMPLETED,<pid>,<psid>,<enrollment-id>,<ou>,2016-01-01,2016-01-01,,,<de>,4,,
The import strategy SYNC should be used only by the internal synchronization task and not for regular imports. The SYNC strategy allows all three operations, CREATE, UPDATE and DELETE, to be present in the payload at the same time.
1.71.9 Tracker Ownership Management

A new concept called Tracker Ownership was introduced in 2.30. There is now one owner organisation unit for a tracked entity instance in the context of a program. Programs that are configured with an access level of PROTECTED or CLOSED will adhere to the ownership privileges. Only users belonging to the owning org unit for a tracked entity-program combination will be able to access the data related to that program for that tracked entity.
It is possible to temporarily override this ownership privilege for a program that is configured with an access level of PROTECTED. Any user will be able to temporarily gain access to the program-related data if the user specifies a reason for accessing the tracked entity-program data. This act of temporarily gaining access is termed breaking the glass. Currently, the temporary access is granted for 3 hours. DHIS2 audits breaking the glass along with the reason specified by the user. It is not possible to gain temporary access to a program that has been configured with an access level of CLOSED. To break the glass for a tracked entity-program combination, you can issue a POST request as shown:
/api/33/tracker/ownership/override?trackedEntityInstance=DiszpKrYNg8
&program=eBAyeGv0exc&reason=patient+showed+up+for+emergency+care
It is possible to transfer the ownership of a tracked entity-program from one org unit to another.
This will be useful in case of patient referrals or migrations. Only an owner (or users who have
broken the glass) can transfer the ownership. To transfer ownership of a tracked entity-program
to another organisation unit, you can issue a PUT request as shown:
/api/33/tracker/ownership/transfer?trackedEntityInstance=DiszpKrYNg8
&program=eBAyeGv0exc&ou=EJNxP3WreNP
1.72 Potential Duplicates
Potential duplicates are records we work with in the data deduplication feature. Due to the
nature of the deduplication feature, this API endpoint is somewhat restricted.
A potential duplicate represents a single record or a pair of records which are suspected to be duplicates.
"teiA":
"<id>",
"teiB":
"<id>",
"status":
"OPEN|
INVALID|
MERGED"
}
You can retrieve a list of potential duplicates using the following endpoint:

GET /api/potentialDuplicates

A single potential duplicate can be retrieved by id:

GET /api/potentialDuplicates/<id>

To create a new potential duplicate, POST to the same resource:

POST /api/potentialDuplicates

The payload you provide needs at least teiA to be a valid tracked entity instance; teiB is optional. If teiB is set, it also needs to point to an existing tracked entity instance.
"teiA":
"<id>",
"teiB":
"<id>"
}
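A create request with curl might look like the following (hostname and credentials are illustrative):

curl -X POST -H "Content-Type: application/json" -d '{"teiA": "<id>", "teiB": "<id>"}' "localhost/api/potentialDuplicates" -u admin:district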
You can mark a potential duplicate as invalid to tell the system that the potential duplicate has
been investigated and deemed to be not a duplicate. To do so you can use the following
endpoint:
PUT /api/potentialDuplicates/<id>/invalidation
You can delete a potential duplicate with a DELETE request:

DELETE /api/potentialDuplicates/<id>
1.73 Email

The Web API features a resource for sending emails. For emails to be sent it is required that the SMTP configuration has been properly set up and that a system notification email address for the DHIS2 instance has been defined. You can set SMTP settings from the email settings screen and the system notification email address from the general settings screen in DHIS2.
/api/33/email
The notification resource lets you send system email notifications with a given subject and text in
JSON or XML. The email will be sent to the notification email address as defined in the DHIS2
general system settings:
{
  "subject": "Integrity check summary",
  "text": "All checks ran successfully"
}
You can send a system email notification by posting to the notification resource like this:
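Assuming the payload above is saved in a file, such a request might look like the following (file name, hostname and credentials are illustrative):

curl -X POST -d @message.json -H "Content-Type: application/json" "localhost/api/33/email/notification" -u admin:district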
You can also send a general email notification by posting to the notification resource as shown below. The F_SEND_EMAIL or ALL authority is required to use this API. The subject parameter is optional; the string "DHIS 2" will be used as the default subject if it is not provided in the URL. The URL parameters should be encoded in order to use this API.
curl "localhost/api/33/email/notification?
recipients=xyz%[Link]&message=sample%20email&subject=Test%20Email"
-X POST -u
admin:district
To test whether the SMTP setup is correct, you can send a test email to yourself by interacting with the test resource. To send test emails, your DHIS2 user account must have a valid email address associated with it. You can send a test email like this:
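Such a request might look like the following (hostname and credentials are illustrative):

curl -X POST "localhost/api/33/email/test" -u admin:district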
1.74 Sharing
The sharing solution allows you to share most objects in the system with specific user groups and
to define whether objects should be publicly accessible or private. To get and set sharing status
for objects you can interact with the sharing resource.
/api/33/sharing
To request the sharing status for an object use a GET request to:
/api/33/sharing?type=dataElement&id=fbfJHSPpUQD
"meta":
{
"allowPublicAccess":
true,
"allowExternalAccess":
false
},
"object":
{
"id":
"fbfJHSPpUQD",
"name":
"ANC
1st
visit",
"publicAccess":
"rw------",
"externalAccess":
false,
"user":
{},
"userGroupAccesses":
[
{
"id":
"hj0nnsVsPLU",
"access":
"rw------"
},
{
"id":
"qMjBflJMOfB",
"access":
"r-------"
}
]
}
}
1.74.2 Set sharing status
You can define the sharing status for an object using the same URL with a POST request, where
the payload in JSON format looks like this:
"object":
{
"publicAccess":
"rw------",
"externalAccess":
false,
"user":
{},
"userGroupAccesses":
[
{
"id":
"hj0nnsVsPLU",
"access":
"rw------"
},
{
"id":
"qMjBflJMOfB",
"access":
"r-------"
}
]
}
}
In this example, the payload defines the object to have read-write public access, no external
access (without login), read-write access to one user group and read-only access to another user
group. You can submit this to the sharing resource using curl:
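Assuming the payload is saved in a file, the request might look like the following (file name, hostname and credentials are illustrative):

curl -X POST -d @sharing.json -H "Content-Type: application/json" "localhost/api/33/sharing?type=dataElement&id=fbfJHSPpUQD" -u admin:district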
1.75 Scheduling
DHIS2 allows for scheduling of jobs of various types. Each type of job has different properties for
configuration, giving you finer control over how jobs are run. In addition, you can configure the
same job to run with different configurations and at different intervals if required.
Main properties

ANALYTICS_TABLE
• skipTableTypes: analytics table types to skip. Possible values: DATA_VALUE, COMPLETENESS, COMPLETENESS_TARGET, ORG_UNIT_TARGET, EVENT, ENROLLMENT, VALIDATION_RESULT
• skipResourceTables: skip generation of resource tables

DATA_SYNC: NONE
META_DATA_SYNC: NONE
SEND_SCHEDULED_MESSAGE: NONE
PROGRAM_NOTIFICATIONS: NONE

MONITORING (validation rule analysis)
• relativeStart (int, default 0): a number related to the date of execution which marks the start of the period to monitor
• relativeEnd (int, default 0): a number related to the date of execution which marks the end of the period to monitor
• validationRuleGroups (array of UIDs, default none): validation rule groups to include in the job
• sendNotification (boolean, default false): set to "true" if the job should send notifications based on validation rule groups
• persistsResults (boolean, default false): set to "true" if the job should persist validation results

PREDICTOR
• predictors: predictors (UIDs) to include in the job

1.75.1 Get available job types
To get a list of all available job types you can use the following endpoint:
GET /api/jobConfigurations/jobTypes
The response contains information about each job type including name, job type, key, scheduling
type and available parameters. The scheduling type can either be CRON, meaning jobs can be
scheduled using a cron expression with the cronExpression field, or FIXED_DELAY, meaning
jobs can be scheduled to run with a fixed delay in between with the delay field. The field delay is
given in seconds.
"jobTypes":
[
{
"name":
"Data
integrity",
"jobType":
"DATA_INTEGRITY",
"key":
"dataIntegrityJob",
"schedulingType":
"CRON"
},
{
"name":
"Resource
table",
337
1 Web API 1.75.2 Create job
"jobType":
"RESOURCE_TABLE",
"key":
"resourceTableJob",
"schedulingType":
"CRON"
},
{
"name": "Continuous
analytics table",
"jobType":
"CONTINUOUS_ANALYTICS_TABLE",
"key":
"continuousAnalyticsTableJob",
"schedulingType":
"FIXED_DELAY"
}
]
}
1.75.2 Create job

To configure jobs you can do a POST request to the following resource using JSON:

/api/jobConfigurations

A job without parameters in JSON format looks like this:

{
  "name": "",
  "jobType": "JOBTYPE",
  "cronExpression": "0 * * ? * *"
}
An example of a job with parameters, in this case an analytics table job covering the last two years:

{
  "name": "Analytics tables last two years",
  "jobType": "ANALYTICS_TABLE",
  "cronExpression": "0 * * ? * *",
  "jobParameters": {
    "lastYears": "2",
    "skipTableTypes": [],
    "skipResourceTables": false
  }
}
An example of a push analysis job:

{
  "name": "Push analysis charts",
  "jobType": "PUSH_ANALYSIS",
  "cronExpression": "0 * * ? * *",
  "jobParameters": {
    "pushAnalysis": ["jtcMAKhWwnc"]
  }
}
An example of a job with scheduling type FIXED_DELAY and 120 seconds delay:
{
  "name": "Continuous analytics table",
  "jobType": "CONTINUOUS_ANALYTICS_TABLE",
  "delay": "120",
  "jobParameters": {
    "fullUpdateHourOfDay": 4
  }
}
1.75.3 Get jobs

To list all job configurations:

GET /api/jobConfigurations

Retrieve a job:

GET /api/jobConfigurations/{id}
{
  "lastUpdated": "2018-02-22T[Link].067",
  "id": "KBcP6Qw37gT",
  "href": "[Link]/jobConfigurations/KBcP6Qw37gT",
  "created": "2018-02-22T[Link].067",
  "name": "analytics last two years",
  "jobStatus": "SCHEDULED",
  "displayName": "analytics last two years",
  "enabled": true,
  "externalAccess": false,
  "jobType": "ANALYTICS_TABLE",
  "nextExecutionTime": "2018-02-26T[Link].000",
  "cronExpression": "0 0 3 ? * MON",
  "jobParameters": {
    "lastYears": 2,
    "skipTableTypes": [],
    "skipResourceTables": false
  },
  "favorite": false,
  "configurable": true,
  "access": {
    "read": true,
    "update": true,
    "externalize": true,
    "delete": true,
    "write": true,
    "manage": true
  },
  "lastUpdatedBy": {
    "id": "GOLswS44mh8"
  },
  "favorites": [],
  "translations": [],
  "userGroupAccesses": [],
  "attributeValues": [],
  "userAccesses": []
}
1.75.4 Update job

Update a job with parameters using the following endpoint and JSON payload format:

PUT /api/jobConfiguration/{id}
{
  "name": "analytics last two years",
  "enabled": true,
  "cronExpression": "0 0 3 ? * MON",
  "jobType": "ANALYTICS_TABLE",
  "jobParameters": {
    "lastYears": "3",
    "skipTableTypes": [],
    "skipResourceTables": false
  }
}
Delete a job:

DELETE /api/jobConfiguration/{id}
Note that some jobs with custom configuration parameters may not be added if the required
system settings are not configured. An example of this is data synchronization, which requires
remote server configuration.
1.76 Schema
A resource which can be used to introspect all available DXF 2 objects can be found at /api/schemas. For a specific resource type you can have a look at /api/schemas/<type>.

To list all available schemas in JSON:

GET /api/schemas.json

To list all available schemas in XML:

GET /api/schemas.xml

To retrieve the schema for a specific class, for example data elements:

GET /api/schemas/dataElement.json
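The schema resource can also be used to validate a payload of a given type before persisting it, by POSTing the object to the schema for that type. With curl, such a request might look like the following (file name, hostname and credentials are illustrative):

curl -X POST -d @dataElement.json -H "Content-Type: application/json" "localhost/api/schemas/dataElement" -u admin:district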
1.77 UI customization
To customize the UI of the DHIS2 application you can insert custom JavaScript and CSS styles
through the files resource. The JavaScript and CSS content inserted through this resource will be
loaded by the DHIS2 web application. This can be particularly useful in certain situations:
• Overriding the CSS styles of the DHIS2 application, such as the login page or main page.
• Defining JavaScript functions which are common to several custom data entry forms and
HTML-based reports.
• Including CSS styles which are used in custom data entry forms and HTML-based reports.
1.77.1 Javascript
To insert JavaScript from a file called script.js you can interact with the files/script resource with a POST request:
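Such a request might look like the following (file name, content type, hostname and credentials are illustrative):

curl --data-binary @script.js "localhost/api/33/files/script" -H "Content-Type: application/javascript" -X POST -u admin:district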
Note that we use the --data-binary option to preserve formatting of the file content. You can
fetch the JavaScript content with a GET request:
/api/33/files/script
1.77.2 CSS
To insert CSS from a file called style.css you can interact with the files/style resource with a POST request:
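Such a request might look like the following (file name, content type, hostname and credentials are illustrative):

curl --data-binary @style.css "localhost/api/33/files/style" -H "Content-Type: text/css" -X POST -u admin:district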
You can fetch the CSS content with a GET request:

/api/33/files/style
1.78 Synchronization
To initiate a data value push to a remote server one must first configure the URL and credentials
for the relevant server from System settings > Synchronization, then make a POST request to the
following resource:
/api/33/synchronization/dataPush
To initiate a metadata pull from a remote JSON document you can make a POST request with a
url as request payload to the following resource:
/api/33/synchronization/metadataPull
To check the availability of the remote data server and verify user credentials you can make a
GET request to the following resource:
/api/33/synchronization/availability
1.79 Applications
The /api/apps endpoint can be used for installing, deleting and listing apps. The app key is
based on the app name, but with all non-alphanumerical characters removed, and spaces
replaced with a dash. My app! will return the key My-app.
Note
Previous to 2.28, the app key was derived from the name of the ZIP
archive, excluding the file extension. URLs using the old format should
still return the correct app in the api.
/api/33/apps
Note
Previous to 2.28 the app property folderName referred to the actual
path of the installed app. With the ability to store apps on cloud
services, folderName’s purpose changed, and will now refer to the app
key.
You can read the keys for apps by listing all apps from the apps resource and looking for the key property. To list all installed apps in JSON:
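Such a request might look like the following (hostname and credentials are illustrative):

curl "localhost/api/33/apps" -u admin:district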
You can also simply point your web browser to the resource URL:
[Link]
The apps list can also be filtered by app type and by name, by appending one or more filter
parameters to the URL:
[Link]
App names support the eq and ilike filter operators, while appType supports eq only.
1.79.4 Reload apps
To force a reload of currently installed apps, you can issue the following command. This is useful
if you added a file manually directly to the file system, instead of uploading through the DHIS2
user interface.
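The reload is triggered with a PUT request to the apps resource; with curl this might look like the following (hostname and credentials are illustrative):

curl -X PUT "localhost/api/33/apps" -u admin:district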
If the DHIS2 instance has been configured to use cloud storage, apps will now be installed and stored on the cloud service. This enables multiple instances to share the same versions of installed apps, instead of installing the same apps on each individual instance.
Note
Previous to 2.28, installed apps would only be stored on the instance's local filesystem. Apps installed before 2.28 will still be available on the instance where they were installed, but they will not be shared with other instances, as they are still located on that instance's local filesystem.
1.80 App store

The Web API exposes the content of the DHIS2 App Store as a JSON representation which can be found at the /api/appStore resource.

/api/33/appStore
GET /api/33/appStore
{
  [
    {
      "name": "Tabular Tracker Capture",
      "description": "Tabular Tracker Capture is an app that makes you more effective.",
      "sourceUrl": "[Link]/dhis2/App-repository",
      "appType": "DASHBOARD_WIDGET",
      "status": "PENDING",
      "id": "NSD06BVoV21",
      "developer": {
        "name": "DHIS",
        "organisation": "Uio",
        "address": "Oslo",
        "email": "dhis@[Link]"
      },
      "versions": [
        {
          "id": "upAPqrVgwK6",
          "version": "1.2",
          "minDhisVersion": "2.17",
          "maxDhisVersion": "2.20",
          "downloadUrl": "[Link]",
          "demoUrl": "[Link]/demo"
        }
      ],
      "images": [
        {
          "id": "upAPqrVgwK6",
          "logo": "true",
          "imageUrl": "[Link]",
          "description": "added feature snapshot",
          "caption": "dialog"
        }
      ]
    }
  ]
}
You can install apps on your instance of DHIS2 assuming you have the appropriate permissions.
An app is referred to using the id property of the relevant version of the app. An app is installed
with a POST request with the version id to the following resource:
POST /api/33/appStore/{app-version-id}
1.81 Data store
Using the dataStore resource, developers can store arbitrary data for their apps. Access to a
datastore’s key is based on its sharing settings. By default all keys created are publicly accessible
(read and write). Additionally, access to a datastore’s namespace is limited to the user’s access to
the corresponding app, if the app has reserved the namespace. For example a user with access
to the “sampleApp” application will also be able to use the sampleApp namespace in the
datastore. If a namespace is not reserved, no specific access is required to use it.
/api/33/dataStore
Data store entries consist of a namespace, key and value. The combination of namespace and
key is unique. The value data type is JSON.
GET /api/33/dataStore
curl "[Link]/demo/api/33/dataStore" -u
admin:district
Example response:
["foo",
"bar"]
GET /api/33/dataStore/<namespace>
347
1 Web API 1.81.3 Create values
curl "[Link]/demo/api/33/dataStore/foo" -u
admin:district
Example response:
["key_1",
"key_2"]
GET /api/33/dataStore/<namespace>/<key>
curl "[Link]/demo/api/33/dataStore/foo/key_1"-u
admin:district
Example response:
{
"foo":
"bar"
}
GET /api/33/dataStore/<namespace>/<key>/metaData
curl "[Link]/demo/api/33/dataStore/foo/key_1/metaData" -u
admin:district
Example response:
"created":
"...",
"user":
{...},
"namespace":
"foo",
"key":
"key_1"
}
POST /api/33/dataStore/<namespace>/<key>
curl "[Link]
key_1" -X POST
-H "Content-Type: application/json" -d
"{\"foo\":\"bar\"}" -u admin:district
Example response:
"httpStatus":
"OK",
"httpStatusCode":
201,
"status":
"OK",
"message":
"Key
'key_1'
created."
}
If you require the data you store to be encrypted (for example user credentials or similar) you
can append a query to the url like this:
GET /api/33/dataStore/<namespace>/<key>?encrypt=true
PUT /api/33/dataStore/<namespace>/<key>
curl "[Link]
-X PUT -d "[1, 2, 3]"
-H "Content-Type: application/json" -u
admin:district
Example response:
"httpStatus":
"OK",
"httpStatusCode":
200,
"status":
"OK",
"message":
"Key
'key_1'
updated."
}
DELETE /api/33/dataStore/<namespace>/<key>
Example response:
"httpStatus":
"OK",
"httpStatusCode":
200,
"status":
"OK",
"message": "Key 'key_1' deleted
from namespace 'foo'."
}
DELETE /api/33/dataStore/<namespace>
Example response:
"httpStatus":
"OK",
"httpStatusCode":
200,
"status":
"OK",
"message":
"Namespace
'foo'
deleted."
}
Sharing of datastore keys follows the same principle as for other metadata sharing (see Sharing).
GET /api/33/sharing?type=dataStore&id=<uid>
POST /api/33/sharing?type=dataStore&id=<uid>
"object":
{
"publicAccess":
"rw------",
"externalAccess":
false,
"user":
{},
"userAccesses":
[],
"userGroupAccesses":
[
{
"id":
"hj0nnsVsPLU",
"access":
"rw------"
},
{
"id":
"qMjBflJMOfB",
"access":
"r-------"
}
]
}
}
1.82 User data store

In addition to the dataStore which is shared between all users of the system, a user-based data store is also available. Data stored in the userDataStore is associated with individual users, so that each user can have different data for the same namespace and key combination. All calls against the userDataStore will be associated with the logged-in user. This means you can only see, change, remove and add values associated with the currently logged-in user.
/api/33/userDataStore
userDataStore consists of a user, a namespace, keys and associated values. The combination of
user, namespace and key is unique.
GET /api/33/userDataStore
Example response:

["foo", "bar"]
GET /api/userDataStore/<namespace>
Example response:

["key_1", "key_2"]
GET /api/33/userDataStore/<namespace>/<key>
Example response:

{
  "some": "value"
}
POST /api/33/userDataStore/<namespace>/<key>
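A create request for the namespace and key used in this example might look like the following (hostname and credentials are illustrative):

curl -X POST -H "Content-Type: application/json" -d '{"some": "value"}' "localhost/api/33/userDataStore/foo/bar" -u admin:district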
Example response:

{
  "httpStatus": "Created",
  "httpStatusCode": 201,
  "status": "OK",
  "message": "Key 'bar' in namespace 'foo' created."
}
If you require the value to be encrypted (For example user credentials and such) you can append
a query to the url like this:
GET /api/33/userDataStore/<namespace>/<key>?encrypt=true
PUT /api/33/userDataStore/<namespace>/<key>
Example response:

{
  "httpStatus": "Created",
  "httpStatusCode": 201,
  "status": "OK",
  "message": "Key 'bar' in namespace 'foo' updated."
}
Delete a key
DELETE /api/33/userDataStore/<namespace>/<key>
Example response:

{
  "httpStatus": "OK",
  "httpStatusCode": 200,
  "status": "OK",
  "message": "Key 'bar' deleted from the namespace 'foo'."
}
DELETE /api/33/userDataStore/<namespace>
Example response:

{
  "httpStatus": "OK",
  "httpStatusCode": 200,
  "status": "OK",
  "message": "All keys from namespace 'foo' deleted."
}
1.83 Predictors

A predictor allows you to generate data values based on an expression. This can be used to generate targets, thresholds and estimated values. You can interact with predictors through the /api/33/predictors resource.

/api/33/predictors
You can create a predictor with a POST request to the predictors resource:
POST /api/33/predictors
{
  "id": "AG10KUJCrRk",
  "name": "Malaria Outbreak Threshold Predictor",
  "shortName": "Malaria Outbreak Predictor",
  "description": "Computes the threshold for potential malaria outbreaks based on the mean plus 1.5x the std dev",
  "output": {
    "id": "nXJJZNVAy0Y"
  },
  "generator": {
    "expression": "AVG(#{r6nrJANOqMw})+1.5*STDDEV(#{r6nrJANOqMw})",
    "dataElements": [],
    "sampleElements": [
      {
        "id": "r6nrJANOqMw"
      }
    ]
  },
  "periodType": "Monthly",
  "sequentialSampleCount": 4,
  "sequentialSkipCount": 1,
  "annualSampleCount": 3,
  "organisationUnitLevels": [4]
}
The output element refers to the identifier of the data element for which to save predicted data values. The generator element refers to the expression to use when calculating the predicted values.
To run all predictors (generating predicted values) you can make a POST request to the run
resource:
POST /api/33/predictors/run
To run a single predictor you can make a POST request to the run resource for a predictor:
POST /api/33/predictors/AG10KUJCrRk/run
1.84 Min-max data elements

The min-max data elements resource allows you to set minimum and maximum value ranges for data elements. It is unique by the combination of organisation unit, data element and category option combo.

/api/minMaxDataElements

You can retrieve a list of all min-max data elements from the following resource:

GET /api/minMaxDataElements.json
GET /api/minMaxDataElements.json?filter=dataElement.id:eq:UOlfIjgN8X6
GET /api/minMaxDataElements.json?filter=dataElement.id:in:[UOlfIjgN8X6,xc8gmAKfO95]

The filter parameter for min-max data elements supports two operators: eq and in. You can also use the fields query parameter:

GET /api/minMaxDataElements.json?fields=:all,dataElement[id,name]

To add or update a min-max data element, send a POST request with a payload like the following:

POST /api/minMaxDataElements.json
{
  "min": 1,
  "generated": false,
  "max": 100,
  "dataElement": {
    "id": "UOlfIjgN8X6"
  },
  "source": {
    "id": "DiszpKrYNg8"
  },
  "optionCombo": {
    "id": "psbwp3CQEhs"
  }
}
If the combination of data element, organisation unit and category option combo exists, the min-max value will be updated.

To delete a min-max data element, send a DELETE request with a payload that identifies the combination:

DELETE /api/minMaxDataElements.json
{
  "min": 1,
  "generated": false,
  "max": 100,
  "dataElement": {
    "id": "UOlfIjgN8X6"
  },
  "source": {
    "id": "DiszpKrYNg8"
  },
  "optionCombo": {
    "id": "psbwp3CQEhs"
  }
}
1.85 Lock exceptions

The lock exceptions resource allows you to open otherwise locked data sets for data entry for a specific data set, period and organisation unit. You can read lock exceptions from the following resource:

/api/lockExceptions
To create a new lock exception you can use a POST request and specify the data set, period and
organisation unit:
POST /api/lockExceptions?ds=BfMAe6Itzgt&pe=201709&ou=DiszpKrYNg8
To delete a lock exception you can use a similar request syntax with a DELETE request:
DELETE /api/lockExceptions?ds=BfMAe6Itzgt&pe=201709&ou=DiszpKrYNg8
1.86 Tokens
You can retrieve a Google service account OAuth 2.0 access token with a GET request to the
following resource.
GET /api/tokens/google
The token will be valid for a certain amount of time, after which another token must be
requested from this resource. The response contains a cache control header which matches the
token expiration. The response will contain the following properties in JSON format.
Token response
Property Description
access_token The OAuth 2.0 access token to be used when authenticating against Google services.
expires_in The number of seconds until the access token
expires, typically 3600 seconds (1 hour).
client_id The Google service account client id.
This assumes that a Google service account has been set up and configured for DHIS2. Please
consult the installation guide for more info.
1.87 Analytics table hooks

Analytics table hooks provide a mechanism for invoking SQL scripts during different phases of the analytics table generation process. This is useful for customizing data in resource and analytics tables, e.g. in order to achieve specific logic for calculations and aggregation. Analytics table hooks can be manipulated at the following API endpoint:

/api/analyticsTableHooks
The analytics table hooks API supports the standard HTTP CRUD operations for creating (POST),
updating (PUT), retrieving (GET) and deleting (DELETE) entities.
The ANALYTICS_TABLE_POPULATED phase takes place after the analytics table has been
populated, but before indexes have been created and the temp table has been swapped with the
main table. As a result, the SQL script should refer to the analytics temp table, e.g.
analytics_temp, analytics_completeness_temp.
This applies also to the RESOURCE_TABLE_POPULATED phase, which takes place after the
resource table has been populated, but before indexes have been created and the temp table
has been swapped with the main table. As a result, the SQL script should refer to the resource
temp table, e.g. _orgunitstructure_temp, _categorystructure_temp.
You should define only one of the resourceTableType and analyticsTableType fields, depending
on which phase is defined.
You can refer to the temporary database table which matches the specified hook table type only
(other temporary tables will not be available). As an example, if you specify
ORG_UNIT_STRUCTURE as the resource table type, you can refer to the _orgunitstructure_temp
temporary database table only.
The following table shows the valid combinations of phases, table types and temporary tables.
To create a hook which should run after the resource tables have been populated you can do a
POST request like this using JSON format:
{
  "name": "Update 'Area' in org unit group set resource table",
  "phase": "RESOURCE_TABLE_POPULATED",
  "resourceTableType": "ORG_UNIT_GROUP_SET_STRUCTURE",
  "sql": "update _organisationunitgroupsetstructure_temp set \"uIuxlbV1vRT\" = 'b0EsAxm8Nge'"
}
To create a hook which should run after the data value analytics table has been populated you
can do a POST request like this using JSON format:
{
  "name": "Update 'Currently on treatment' data in analytics table",
  "phase": "ANALYTICS_TABLE_POPULATED",
  "analyticsTableType": "DATA_VALUE",
  "sql": "update analytics_temp set monthly = '200212' where \"monthly\" in ('200210', '200211')"
}
1.88 Metadata repository

DHIS2 provides a metadata repository containing metadata packages with various content. A metadata package is a DHIS2-compliant JSON document which describes a set of metadata objects.
To retrieve an index over available metadata packages you can issue a GET request to the
metadataRepo resource:
GET /api/synchronization/metadataRepo
A metadata package entry contains information about the package and a URL to the relevant
package. An index could look like this:
"packages":
[
{
"id":
"sierre-
leone-
demo",
"name":
"Sierra
Leone
demo",
"description": "Sierra
Leone demo database",
"version":
"0.1",
"href": "[Link]
demo/[Link]"
361
1 Web API 1.89 Icons
},
{
"id":
"trainingland-
org-units",
"name": "Trainingland
organisation units",
"description": "Trainingland organisation
units with four levels",
"version":
"0.1",
"href": "[Link]
units/[Link]"
}
]
}
A client can follow the URLs and install a metadata package through a POST request with content
type text/plain with the metadata package URL as the payload to the metadataPull resource:
POST /api/synchronization/metadataPull
curl "localhost:8080/api/synchronization/
metadataPull" -X POST
-d "[Link]
[Link]"
-H "Content-Type:text/plain" -u
admin:district
1.89 Icons
DHIS2 includes a collection of icons that can be used to give visual context to metadata. These
icons can be accessed through the icons resource.
GET /api/icons
This endpoint returns a list of information about the available icons. Each entry contains
information about the icon, and a reference to the actual icon.
{
  "key": "mosquito_outline",
  "description": "Mosquito outline",
  "keywords": ["malaria", "mosquito", "dengue"],
  "href": "<dhis server>/api/icons/mosquito_outline/[Link]"
}
The keywords can be used to filter which icons to return. Passing a list of keywords with the
request will only return icons that match all the keywords:
GET /api/icons?keywords=shape,small
GET /api/icons/keywords
2 Applications
A packaged app is an Open Web App that has all of its resources (HTML, CSS, JavaScript, app manifest, and so on) contained in a zip file. It can be uploaded to a DHIS2 installation directly through the user interface at runtime. A packaged app is a ZIP file with an app manifest in its root directory. The manifest must be named manifest.webapp. A thorough description of apps can be obtained here.
2.1 Purpose of packaged Apps

The purpose of packaged apps is to extend the web interface of DHIS2, without the need to
modify the source code of DHIS2 itself. A system deployment will often have custom and unique
requirements. The apps provide a convenient extension point to the user interface. Through
apps, you can complement and customize the DHIS2 core functionality with custom solutions in a
loosely coupled and clean manner.
Apps do not have permissions to interact directly with the DHIS2 Java API. Instead, apps are expected to use functionality and interact with the DHIS2 services and data by utilizing the DHIS2 Web API.

2.2 Creating Apps

DHIS2 apps are constructed with HTML, JavaScript and CSS files, similar to any other web application. Apps also need a special file called manifest.webapp which describes the contents of the app. A basic example of the manifest.webapp is shown below:
{
"version": "0.1",
"name": "My App",
"description": "My App is a Packaged App",
"launch_path": "/[Link]",
"appType": "APP",
"icons": {
"16": "/img/icons/[Link]",
"48": "/img/icons/[Link]",
"128": "/img/icons/[Link]"
},
"developer": {
"name": "Me",
"url": "[Link]
},
"default_locale": "en",
"activities": {
"dhis": {
"href": "*",
"namespace": "my-namespace"
}
},
"authorities": [
"MY_APP_ADD_NEW",
"MY_APP_UPDATE",
"MY_APP_DELETE"
}
}
The manifest.webapp file must be located at the root of the project. Among the properties are:
• The icons→48 property is used for the icon that is displayed on the list of apps that are
installed on a DHIS2 instance.
• The authorities property contains a list of DHIS2 authorities which can be used to restrict
users from certain actions on the current app. This list will be loaded into DHIS2 during
app installation process and available for selecting in User Role management form.
• The * value for href is converted to the appropriate URL when the app is uploaded and installed in DHIS2. This value can then be used by the application's JavaScript and HTML files to make calls to the DHIS2 Web API and identify the correct location of the DHIS2 server on which the app has been installed. To clarify, the activities part will look similar to this after the app has been installed:
"activities": {
"dhis": {
"href": "[Link]
"namespace": "my-namespace"
}
}
The namespace property can be added if your app is utilizing the dataStore or userDataStore api.
When adding the namespace property, only users with access to your app are allowed to make
changes to the namespace. A namespace can only be reserved in this way once. If another app
tries to reserve a namespace already in use, the installation of the other app will fail.
If you have a collection of apps that want to share the same namespace, but also wish to reserve it, the users of the apps need to have the authority to use the app that initially reserved the namespace.
Note
Namespaces will not be created until at least one key-value pair is present in the namespace. Specifying a namespace in the manifest only restricts access; it does not create any data in the namespace.
The appType property specifies how the app will be displayed by the DHIS2 instance. The
possible values for appType and their effects are explained in the following table.
App types
If no appType is specified in the manifest, the system will use “APP” by default.
To read the JSON structure into JavaScript, you can use a regular AJAX request and parse the
JSON into an object. Most Javascript libraries provide some support, for instance with jQuery it
can be done like this:
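A minimal sketch with jQuery (the callback and variable names are illustrative) might look like this:

$.getJSON( "manifest.webapp", function( manifest ) {
    // the dhis activity href points at the DHIS2 server the app is installed on
    var apiBaseUrl = manifest.activities.dhis.href + "/api";
    console.log( "API base URL: " + apiBaseUrl );
} );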
The app can contain HTML, JavaScript, CSS, images and other files which may be required to support it. The file structure could look something like this:
/
/[Link] #manifest file (mandatory)
/css/ #css stylesheets (optional)
/img/ #images (optional)
/js/ #javascripts (optional)
Note
It is only the manifest.webapp file which must be placed in the root. It is up to the developer to organize CSS, images and JavaScript files inside the app as needed.
All the files in the project should be compressed into a standard zip archive. Note that the manifest.webapp file must be located at the root of the zip archive (do not include a parent directory in the archive). The zip archive can then be installed into DHIS2 as you will see in the next section.
2.3 Installing Apps into DHIS2

Apps can be installed by uploading the zip file into the App Manager. In Services → Apps, click on the App Store menu item.
The app can be uploaded by pressing the Browse button; after selecting the zip package, the file is uploaded automatically and installed in DHIS2. You can also browse through apps in the DHIS2 App Store and download apps from there. The DHIS2 App Store allows for app searching, reviewing, commenting, requesting features and rating of apps by the community.
After installation, your apps will be integrated with the menu system and can be accessed under services and from the module overview page. They can also be accessed from the home page of the apps module. Click on an app in the list in order to launch it.
3 Development Infrastructure
3.1 Release process
1. In order to tag the source code with the new release, first temporarily add a dependency to dhis-web in the root pom.xml:

<module>dhis-web</module>

mvn versions:set
mvn versions:commit

This will prompt you to enter the version. Include [Link]. Remove the dhis-web dependency. Update the application cache manifests in the various apps to the new version. Commit and push the changes to master.
4. Tag the source code with the SNAPSHOT release. Include [Link]. Remember to add the temporary dhis-web dependency in the root pom.xml. Commit and push the changes to master.
5. Create the Jenkins build for the release WAR file. Include an automatic copy job to [Link]. Include an automatic WAR update of [Link]/demo (add a release directory on the server).
/home/dhis/dhis-live-package
Replace the uncompressed WAR file with the new release. Make a compressed Live archive and move it to the /download/live directory.
10. Update the version info on the play server home page at [Link]. The index file is found on the play server at:
/usr/share/nginx/html/[Link]
11. Update the download page at [Link]/downloads with links to the new Live package, WAR file, source code branch page and sample data, including the version.
4 DHIS2 and R integration

4.1 Introduction
R is a freely available, open source statistical computing environment. R refers to both the computer programming language and the software which can be used to create and run R scripts. There are numerous sources on the web which describe the extensive set of features of R.
4.2 Installing R
If you are installing R on the same server as DHIS2, you should consider using the Comprehensive R Archive Network (CRAN) to get the latest distribution of R. All you need to do is add a line like the following to your /etc/apt/sources.list file.
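The exact entry depends on the mirror and distribution you choose; it typically looks something like the following, using the placeholders described below:

deb http://<your R mirror>/bin/linux/ubuntu <your Ubuntu distribution>/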
You will need to replace <your R mirror> with one from the list available here. You will also need
to replace <your Ubuntu distribution> with the name of the distribution you are using.
Next, let's see if everything is working by simply invoking R from the command line.
>
4.3 Using ODBC to retrieve data from DHIS2 into R
In this example, we will use a system-wide ODBC connector which will be used to retrieve data
from the DHIS2 database. There are some disadvantages with this approach, as ODBC is slower
than other methods and it does raise some security concerns by providing a system-wide
connector to all users. However, it is a convenient method to provide a connection to multiple
users. The RODBC package will be used in this case. An alternative would be the RPostgreSQL package, which can interface directly through the PostgreSQL driver as described in the next section.
Assuming you have already installed R using the procedure in the previous section, invoke the following command to add the required libraries for this example.
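Depending on your setup, one way to install the packages used in this example (RODBC for the database connection and reshape for reshaping data) from within R is:

install.packages(c("RODBC", "reshape"))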
Next, we need to configure the ODBC connection. Edit the file to suit your local situation, using the following template as a guide. Let's create and edit a file called /etc/odbc.ini:
[dhis2]
Description = DHIS2 Database
Driver = /usr/lib/odbc/[Link]
Trace = No
TraceFile = /tmp/[Link]
Database = dhis2
Servername = [Link]
UserName = postgres
Password = SomethingSecure
Port = 5432
Protocol = 9.0
ReadOnly = Yes
RowVersioning = No
ShowSystemTables = No
ShowOidColumn = No
FakeOidIndex = No
ConnSettings =
Debug = 0
From the R prompt, execute the following commands to connect to the DHIS2 database.
> library(RODBC)
> channel<-odbcConnect("dhis2")#Note that the name must match the ODBC connector name
> sqlTest<-c("SELECT dataelementid, name FROM dataelement LIMIT 10;")
> sqlQuery(channel,sqlTest)
name
1 OPD First Attendances Under 5
2 OPD First Attendances Over 5
3 Deaths Anaemia Under 5 Years
4 Deaths Clinical Case of Malaria Under 5 Years
5 Inpatient discharges under 5
6 Inpatient Under 5 Admissions
7 Number ITNs
8 OPD 1st Attendance Clinical Case of Malaria Under 5
9 IP Discharge Clinical Case of Malaria Under 5 Years
10 Deaths of malaria case provided with anti-malarial treatment 1 to 5 Years
>
As an illustrative example, let's say we have been asked to calculate the relative percentage of OPD male and female under 5 attendances for the last twelve months. First, let's create an SQL query which will provide us the basic information which will be required.
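One way such a query might look, using the RODBC connection from above (exact table and column names are illustrative and may differ between DHIS2 versions), is:

> sql<-"SELECT p.startdate, de.name as de, sum(cast(dv.value as numeric)) as sum
+ FROM datavalue dv
+ INNER JOIN period p ON p.periodid = dv.periodid
+ INNER JOIN dataelement de ON de.dataelementid = dv.dataelementid
+ WHERE de.name LIKE 'Attendance OPD%'
+ GROUP BY p.startdate, de.name;"
> OPD<-sqlQuery(channel,sql)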
We have stored the result of the SQL query in an R data frame called "OPD". Let's take a look at what the data looks like.
> head(OPD)
startdate de sum
1 2011-12-01 Attendance OPD <12 months female 42557
2 2011-02-01 Attendance OPD <12 months female 127485
3 2011-01-01 Attendance OPD 12-59 months male 200734
4 2011-04-01 Attendance OPD 12-59 months male 222649
5 2011-06-01 Attendance OPD 12-59 months male 168896
6 2011-03-01 Attendance OPD 12-59 months female 268141
> unique(OPD$de)
[1] Attendance OPD <12 months female Attendance OPD 12-59 months male
[3] Attendance OPD 12-59 months female Attendance OPD >5 years male
[5] Attendance OPD <12 months male Attendance OPD >5 years female
6 Levels: Attendance OPD 12-59 months female ... Attendance OPD >5 years male
>
We can see that we need to aggregate the two age groups (<12 months and 12-59 months) into a single variable, based on gender. Let's reshape the data into a cross-tabulated table to make this easier to visualize and calculate the summaries.
> OPD.ct<-cast(OPD,startdate ~ de) #'OPD.ct' is a placeholder name for the cross-tabulated frame
> colnames(OPD.ct)
[1] "startdate" "Attendance OPD 12-59 months female"
[3] "Attendance OPD 12-59 months male" "Attendance OPD <12 months female"
[5] "Attendance OPD <12 months male" "Attendance OPD >5 years female"
[7] "Attendance OPD >5 years male"
We have reshaped the data so that the data elements are individual columns. It looks like we need to aggregate the second and fourth columns together to get the under 5 female attendance, and then the third and fifth columns to get the male under 5 attendance. To do this, let's subset the data into a new data frame to get just the required information and display the results.
> OPD.ct$OPDUnder5Female<-OPD.ct[,2]+OPD.ct[,4] #females
> OPD.ct$OPDUnder5Male<-OPD.ct[,3]+OPD.ct[,5] #males
> OPD.summary<-OPD.ct[,c(1,8,9)] #new summary data frame ('OPD.summary' is a placeholder name)
> OPD.summary$FemalePercent<-
 OPD.summary$OPDUnder5Female/
 (OPD.summary$OPDUnder5Female + OPD.summary$OPDUnder5Male)*100 #females
> OPD.summary$MalePercent<-
 OPD.summary$OPDUnder5Male/
 (OPD.summary$OPDUnder5Female + OPD.summary$OPDUnder5Male)*100 #males
Of course, this could be accomplished much more elegantly, but for the purposes of illustration this code will do. Finally, let's display the required information.

> OPD.summary[,c(1,4,5)]
startdate FemalePercent MalePercent
1 2011-01-01 51.13360 48.86640
2 2011-02-01 51.49154 48.50846
3 2011-03-01 51.55651 48.44349
4 2011-04-01 51.19867 48.80133
5 2011-05-01 51.29902 48.70098
6 2011-06-01 51.66519 48.33481
7 2011-07-01 51.68762 48.31238
8 2011-08-01 51.49467 48.50533
9 2011-09-01 51.20394 48.79606
10 2011-10-01 51.34465 48.65535
11 2011-11-01 51.42526 48.57474
12 2011-12-01 50.68933 49.31067
We can see that the male and female attendances are very similar for each month of the year,
with seemingly higher male attendance relative to female attendance in the month of December.
In this example, we showed how to retrieve data from the DHIS2 database and manipulate it with some simple R commands. The basic pattern for using DHIS2 and R together will be the retrieval of data from the DHIS2 database with an SQL query into an R data frame, followed by whatever routines (statistical analysis, plotting, etc.) may be required.
4.4 Mapping with R and PostgreSQL

A somewhat more extended example will use the RPostgreSQL library and several other libraries to produce a map from the coordinates stored in the database. We will define a few helper functions to provide a layer of abstraction, which will make the R code more reusable.
Up until this point, we have defined a few functions to help us make a map. We need to get the
coordinates stored in the database and merge these with the indicator which we plan to map. We
then retrieve the data from the aggregated indicator table, create a special type of data frame
(SpatialPointsDataFrame), apply some styling to this, and then create the plot.
In this example, we showed how to use the RPostgreSQL library and other helper libraries (Maptools, ColorBrewer) to create a simple map from the DHIS2 data mart.
4.5 Using R, DHIS2 and the Google Visualization API

Google's Visualization API provides a very rich set of tools for the visualization of multi-dimensional data. In this simple example, we will show how to create a simple motion chart with the Google Visualization API using the "googleVis" R package. Full information on the package can be found here. The basic principle, as with the other examples, is to get some data from the DHIS2 database, bring it into R, perform some minor alterations to make it easier to work with, and then create the chart. In this case, we will compare ANC 1, 2 and 3 data over time and see how they are related with a motion chart.
port<-"5432"
dbname<-"dhis2_demo"
con <- dbConnect(PostgreSQL(), user= user,
password=password,host=host, port=port,dbname=dbname)
#Let's retrieve some ANC data from the demo database
sql<-"SELECT [Link] as province,
[Link] as indicator,
extract(year from [Link]) as year,
[Link]
FROM aggregatedindicatorvalue a
INNER JOIN organisationunit ou on [Link] = [Link]
INNER JOIN indicator i on [Link] = [Link]
INNER JOIN period p on [Link] = [Link]
WHERE [Link] IN
(SELECT DISTINCT indicatorid from indicator where shortname ~*('ANC [123] Coverage'))
AND [Link] IN
(SELECT DISTINCT idlevel2 from _orgunitstructure where idlevel2 is not null)
AND [Link] = (SELECT DISTINCT periodtypeid from periodtype where name = 'Yearly')"
#Store this in a data frame
anc<-dfFromSQL(con,sql)
#Change these some columns to factors so that the reshape will work more easily
anc$province<-[Link](anc$province)
anc$indicator<-[Link](anc$indicator)
#We need the time variable as numeric
anc$year<-[Link]([Link](anc$year))
#Need to cast the table into a slightly different format
anc<-cast(anc,province + year ~ indicator)
#Now, create the motion chart and plot it
M<-gvisMotionChart(anc,idvar="province",timevar="year")
plot(M)
Using packages like brew or Rapache, these types of graphs could be easily integrated into
external web sites. A fully functional version of the chart shown above can be accessed here.
4.6 Using PL/R with DHIS2

The procedural language for R is an extension to the core of PostgreSQL which allows data to be passed from the database to R, where calculations in R can be performed. The data can then be passed back to the database for further processing. In this example, we will create a function to calculate some summary statistics which do not exist by default in SQL by using R. We will then create an SQL View in DHIS2 to display the results. The advantage of utilizing R in this context is that we do not need to write any significant amount of code to return these summary statistics, but can simply utilize the built-in functions of R to do the work for us.
First, you will need to install PL/R, which is described in detail here. Following the example from the PL/R site, we will create some custom aggregate functions as detailed there. We will create two functions, to return the median and the skewness of a range of values.
finalfunc = r_median
);
Next, we will define an SQL query which will be used to retrieve the two new aggregate functions
(median and skewness) which will be calculated using R. In this case, we will just get a single
indicator from the data mart at the district level and calculate the summary values based on the
name of the district which the values belong to. This query is very specific, but could be easily
adapted to your own database.
SELECT ou.shortname, avg(dv.value),
 median(dv.value), skewness(dv.value) FROM aggregatedindicatorvalue dv
INNER JOIN period p on p.periodid = dv.periodid
INNER JOIN organisationunit ou on
 dv.organisationunitid = ou.organisationunitid
WHERE dv.indicatorid = 112670
AND dv.level = 3
AND dv.periodtypeid = 3
AND p.startdate >= '2009-01-01'
GROUP BY ou.shortname;
We can then save this query in the form of an SQL View in DHIS2. A clipped version of the results is shown below.
In this simple example, we have shown how to use PL/R with the DHIS2 database and web interface to display some summary statistics, using R to perform the calculations.
4.7 Using the DHIS2 Web API with R

DHIS2 has a powerful Web API which can be used to integrate applications together. In this section, we will illustrate a few trivial examples of the use of the Web API, and how we can retrieve data and metadata for use in R. The Web API uses basic HTTP authentication (as described in the Web API section of this document). Using two R packages, "RCurl" and "XML", we
will be able to work with the output of the API in R. In the first example, we will get some
metadata from the database.
url<-paste0([Link],"api/reportTables/KJFbpIymTAo/[Link]",authenticate(username,password))
mydata<-GET(url) %>% content(.,"text/csv")
head(mydata)
Here, we have shown how to get some aggregate data from the DHIS2 demo database using the DHIS2 Web API.
In the next code example, we will retrieve some metadata, namely a list of data elements and
their unique identifiers.
#Get the list of data elements. Turn off paging and only get a few attributes.
require(httr)
username<-"admin"
password<-"district"
[Link]<-"[Link]
login<-function(username,password,base.url) {
url<-paste0(base.url,"api/me")
r<-GET(url,authenticate(username,password))
if(r$status_code == 200L) { print("Logged in successfully!")} else {print("Could not login")}
}
getDataElements<-function(base.url) {
url<-paste0(base.url,"api/dataElements?fields=id,name,shortName&paging=false")
r<-content(GET(url,authenticate(username,password)),as="parsed")
do.call(rbind.data.frame,r$dataElements)
}
login(username,password,base.url)
data_elements<-getDataElements(base.url)
head(data_elements)
The object data_elements should now contain a data frame of all data elements in the system.
name id
2 Accute Flaccid Paralysis (Deaths < 5 yrs) FTRrcoaog83
210 Acute Flaccid Paralysis (AFP) follow-up P3jJH5Tu5VC
3 Acute Flaccid Paralysis (AFP) new FQ2o8UBlcrS
4 Acute Flaccid Paralysis (AFP) referrals M62VHgYT2n0
5 Additional notes related to facility uF1DLnZNlWe
6 Admission Date eMyVanycQSC
Categories and category option combinations in DHIS2 analytics are critical for structuring data queries as they allow for detailed data aggregation and filtering. Categories can dynamically add dimensions to the data model, enhancing flexibility. The system uses category option combinations to reference aggregate data elements in expressions, allowing data elements to be reported across various categories. This is particularly relevant when defining expressions for indicators and reports where specific combinations of data elements and their categories are necessary. Wildcards in category option combinations offer additional flexibility by allowing queries to include all category values or specific attributes. Furthermore, category option combos can be updated to maintain database integrity, ensuring that missing combinations are generated, which supports accurate data analysis.
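As a sketch of how a category can be added as a dimension to an analytics query from R, the example below reuses base.url, username and password as defined earlier in this chapter; the category UID is a placeholder you must replace, and the data element is the AFP element from the listing above, so the request is only a template.
#Query analytics with a category as an additional dimension (sketch).
require(httr)
category<-"<category-uid>" #Replace with the UID of a category in your database
url<-paste0(base.url,"api/analytics.csv",
            "?dimension=dx:FQ2o8UBlcrS", #Acute Flaccid Paralysis (AFP) new
            "&dimension=pe:LAST_12_MONTHS",
            "&dimension=",category, #Each category option becomes a column in the response
            "&filter=ou:USER_ORGUNIT")
category_data<-read.csv(text=content(GET(url,authenticate(username,password)),"text"))
head(category_data)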
Partial updates in DHIS2 modify only a subset of an entity's properties rather than replacing the entire entity, which is what occurs during a full update. This minimizes the payload that has to be sent, reducing network utilization and processing overhead on the server, which is particularly useful for large objects or limited-bandwidth scenarios and can reduce latency in data synchronization. Partial updates align with RESTful principles and are performed with the HTTP PATCH method.
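A minimal sketch of such a partial update from R, assuming the httr session set up earlier in this chapter; the data element UID is the AFP element from the listing above and the new shortName value is purely illustrative. Note that the exact payload format accepted by PATCH differs between DHIS2 versions, so consult the Web API section for the version you are running.
#Partially update a single property of a data element with HTTP PATCH (sketch).
require(httr)
require(jsonlite)
payload<-toJSON(list(shortName="AFP deaths (<5y)"),auto_unbox=TRUE) #Only the property to change
r<-PATCH(paste0(base.url,"api/dataElements/FTRrcoaog83"),
         authenticate(username,password),
         body=payload,content_type_json())
status_code(r) #200 indicates the property was updated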
DHIS2 manages message conversations through Web API endpoints that allow users to send, read, and manage messages. When a message conversation is removed for a user, it does not delete the message content itself but rather removes the association of the message with the user. This means the message thread will no longer appear in the user's message list, simplifying inbox management. However, the underlying message remains in the system, highlighting a separation between user visibility and message existence.
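As a sketch, removing a user from a conversation can be done from R with a DELETE request of the form api/messageConversations/<conversation-uid>/<user-uid>; the conversation UID below is a placeholder and the current user's UID is looked up from api/me, reusing the session objects defined earlier.
#Remove the current user from a message conversation (sketch).
require(httr)
conversation<-"<conversation-uid>" #Replace with the UID of a conversation in the inbox
me<-content(GET(paste0(base.url,"api/me"),authenticate(username,password)),as="parsed")
r<-DELETE(paste0(base.url,"api/messageConversations/",conversation,"/",me$id),
          authenticate(username,password))
status_code(r) #The thread disappears from this user's list; the message itself is kept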
The 'preAggregationMeasureCriteria' query parameter in DHIS2's analytics API filters data before aggregation is performed. It allows users to define conditions that data must meet prior to aggregation, ensuring that only values within specified ranges are aggregated. For example, it can specify that only data with values greater than or equal to 10 and less than 100 should be considered in the aggregation process. This pre-filtering can affect the outcome of the aggregation by excluding outlier or erroneous data values that do not meet the predefined criteria.
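The sketch below adds the example criteria from the paragraph (values greater than or equal to 10 and less than 100) to an analytics request from R; the data element, relative period and organisation unit dimension are only placeholders reused from the earlier examples to make the request complete.
#Aggregate only values that are >=10 and <100 (sketch).
require(httr)
url<-paste0(base.url,"api/analytics.csv",
            "?dimension=dx:FQ2o8UBlcrS", #Acute Flaccid Paralysis (AFP) new
            "&dimension=pe:LAST_12_MONTHS",
            "&filter=ou:USER_ORGUNIT",
            "&preAggregationMeasureCriteria=GE:10;LT:100")
filtered<-read.csv(text=content(GET(url,authenticate(username,password)),"text"))
head(filtered)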
In DHIS2, you can validate a data element payload by POSTing it to the schema endpoint at the /api/schemas/dataElement URL. It is crucial to ensure that all required fields such as 'type', 'aggregationOperator', 'domainType', and 'shortName' are included. If any of these fields are missing, the validation will return a message indicating the missing property.
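A sketch of such a validation request from R, reusing the session objects defined earlier; the payload values are illustrative, and on recent DHIS2 versions the corresponding properties are named valueType and aggregationType rather than type and aggregationOperator, so the payload below assumes a 2.3x instance.
#Validate a data element payload against the schema without persisting it (sketch).
require(httr)
require(jsonlite)
de<-list(name="Example element",shortName="Example",
         domainType="AGGREGATE",valueType="INTEGER",aggregationType="SUM")
r<-POST(paste0(base.url,"api/schemas/dataElement"),
        authenticate(username,password),
        body=toJSON(de,auto_unbox=TRUE),content_type_json())
content(r,"parsed") #Any missing required property is reported here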
In DHIS2, sending email notifications requires configuring the necessary SMTP settings, such as the smtpServer details and smtpPassword. Additionally, for notifications to be correctly sent, the remoteServerUrl, remoteServerUsername, and remoteServerPassword may be configured as part of the configuration service, ensuring that the DHIS2 instance can communicate with the relevant servers. These settings ensure the reliable sending of email notifications, which is crucial for system alerts and user feedback.
The Visualizer chart plug-in in DHIS2 is used for embedding high-quality charts into web pages. It allows users to reference existing charts using their ID or to dynamically configure new charts by specifying data dimensions such as indicators, periods, and organisation units in columns, rows, and filters arrays. The charts can be customized with several options including types (e.g., line, column), titles, regression types, and axes properties. This functionality utilizes the jQuery and chart plug-in JavaScript libraries for rendering and can display additional features like target lines and baselines.
The 'outputIdScheme' parameter in DHIS2's analytics API is essential for customizing the format of data identifiers in the responses. It allows users to choose between different identifier schemes, such as using human-readable names rather than codes or UIDs. For example, setting outputIdScheme=NAME enables the response to contain names of data elements instead of their IDs, facilitating easier interpretation and processing by end-users. This feature is particularly useful when exporting data for external reporting or when the data needs to be reviewed in a more understandable format by stakeholders without technical expertise. The parameter affects all metadata objects that are part of the response, and different identifier schemes can be specified, such as CODE for exporting data element operands. Overall, 'outputIdScheme' enhances data accessibility and usability by providing flexibility in data presentation.
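The sketch below requests an analytics response with outputIdScheme=NAME so that the data, period and organisation unit columns contain readable names; the dimensions are placeholders reused from the earlier examples and the session objects are those defined earlier in this chapter.
#Request analytics output with readable names instead of UIDs (sketch).
require(httr)
url<-paste0(base.url,"api/analytics.csv",
            "?dimension=dx:FQ2o8UBlcrS",
            "&dimension=pe:LAST_12_MONTHS",
            "&dimension=ou:USER_ORGUNIT",
            "&outputIdScheme=NAME")
named_data<-read.csv(text=content(GET(url,authenticate(username,password)),"text"))
head(named_data)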
Using the 'ignoreEmptyCollection' parameter when updating tracked entity instances in DHIS2 allows existing attributes or relationships to remain unchanged even if they are not included in the payload. Without this parameter, any attributes or relationships not provided in the update payload would be removed from the system. By setting 'ignoreEmptyCollection' to true, you can prevent unintended deletions when intentionally submitting a payload without certain attributes or relationships. This feature provides greater control and flexibility during updates by safeguarding existing data that you do not wish to modify or remove.
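A sketch of such an update from R, assuming the tracked entity instance endpoint and the parameter exactly as described above, with a minimal partial payload; the instance and attribute UIDs are placeholders and the session objects are those defined earlier in this chapter.
#Update one attribute of a tracked entity instance without touching the others (sketch).
require(httr)
require(jsonlite)
tei<-"<tracked-entity-instance-uid>" #Replace with an existing instance UID
payload<-list(attributes=list(list(attribute="<attribute-uid>",value="New value")))
r<-PUT(paste0(base.url,"api/trackedEntityInstances/",tei,"?ignoreEmptyCollection=true"),
       authenticate(username,password),
       body=toJSON(payload,auto_unbox=TRUE),content_type_json())
status_code(r) #Other attributes and relationships are left unchanged, as described above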
The 'metadata export' functionality in DHIS2 is crucial for managing and exchanging metadata configurations between instances, allowing the selective export and import of metadata to ensure consistency and synchronization across systems. It supports various parameters that enable granular control over the export process. For instance, metadata can be filtered by specific types such as indicators, indicator groups, and data elements using parameters such as indicators=true and indicatorGroups=true. Another parameter, 'fields', allows selection of specific fields like 'id' and 'name' for export, while 'order' can dictate the sorting order of the exported data, e.g., by displayName. Additionally, the 'skipSharing' parameter, when enabled, strips sharing properties from the exported metadata objects, and the 'download' parameter allows the metadata to be exported as a downloadable file. These parameters ensure efficient and controlled exporting of metadata, facilitating sharing and integration processes across different DHIS2 environments while minimizing data inconsistencies.
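As a sketch, the export parameters described above can be combined in a single request from R; the parameter values below simply mirror the examples given in the paragraph, and the session objects are those defined earlier in this chapter.
#Export indicators and indicator groups with selected fields, ordered by display name (sketch).
require(httr)
url<-paste0(base.url,"api/metadata.json",
            "?indicators=true&indicatorGroups=true",
            "&fields=id,name",
            "&order=displayName:asc",
            "&skipSharing=true")
meta<-content(GET(url,authenticate(username,password)),as="parsed")
names(meta) #Lists the exported metadata types, e.g. indicators and indicatorGroups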