The FIWARE Data Space Connector (FIWARE DSC) is a data space connector resulting from the integration of open-source software components that are part of the FIWARE Dataspace Components (FDC) and the Eclipse Dataspace Components (EDC). Every organization participating in a data space can deploy it to "connect" to a data space, acting as data (processing) service provider, consumer of data (processing) services, or both.
This repository provides a description of the FIWARE Data Space Connector, its technical implementation and deployment recipes.
Table of Contents
- Overview
- Release Information
- Components
- Description of modules and interaction flows
- Deployment
- Testing
- Additional documentation and resources
The FIWARE DSC currently integrates the following frameworks:
- Authentication Framework (Identity and trust management) based on OID4VC: facilitates the authentication of participating organizations and their users (end users, devices, software agents including AI agents) using W3C DIDs and Verifiable Credentials, relying on trust mechanisms compatible with EBSI specifications and Gaia-X recommendations. Because it is based on the OIDC family of protocols, the authentication module supports H2M (Human-to-Machine) and M2M (Machine-to-Machine) interaction schemes and has been adopted as a mandatory protocol within the EU Digital Identity initiative (see section 4.2.1 "Interfaces and protocols" in the EUDI Wallet Architecture and Reference Framework specifications).
- Authorization Framework (Policy enforcement): determines, based on policies defined following the W3C ODRL standard, whether a given authenticated consumer may access a given service, by evaluating consumer credentials, the requested service, data properties, and environment parameters
- Product Catalog and Contracting Management Framework: manages, based on standards defined by TM Forum, the catalog of product specifications and offers, product negotiation and ordering processes, and product inventory
- EDC Framework: implements the Eclipse/IDSA Data Space Protocol (DSP) to manage catalog access, product contracting, and transfer process control, with either OID4VC or Eclipse DCP configurable as authentication protocols
- Marketplace Portal: provides a graphical web interface for managing product specifications, offers, contracting, and inventory, based on the FIWARE BAE Marketplace
At the data exchange and service invocation layer, the FIWARE DSC is prepared to manage access to any HTTP-based interface. While it provides built-in compatibility with ETSI NGSI-LD as data exchange API, it can also mediate access to services using S3, NGSIv2, web portal interfaces, A2A or MCP for AI agent functionalities, and any other REST API.
Technically, the FIWARE Data Space Connector is a Helm Umbrella-Chart, containing all the sub-charts and their dependencies for deployment via Helm. It can be deployed using configurable Helm Charts in different environments that support Kubernetes.
The FIWARE Data Space Connector uses a continuous integration flow, where every merge to the main branch triggers a new release. Versioning follows Semantic Versioning 2.0.0, so only major releases contain breaking changes. Important releases are listed below, with additional information linked:
- 8.x.x - Update the FIWARE Data Space Connector from 7.x.x to 8.x.x
The following diagram shows a logical overview of the different components of the FIWARE Data Space Connector.
Specifically, the connector bundles the following components:
| Umbrella component | Sub-umbrella component | Component | Role | Diagram field |
|---|---|---|---|---|
| decentralized-iam | vc-authentication | VCVerifier | Validates VCs and exchanges them for tokens | Verifier |
| | | credentials-config-service | Holds the information which VCs are required for accessing a service | PRP/PAP (authentication) |
| | | trusted-issuers-list | Acts as Trusted Issuers List by providing an EBSI Trusted Issuers Registry API | Local Trusted Issuers List |
| | odrl-authorization | APISIX | APISIX as API Gateway with an OPA plugin | PEP |
| | | OPA | Open Policy Agent as the API Gateway's sidecar | PDP |
| | | odrl-pap | Allows configuring ODRL policies to be used by the OPA | PRP/PAP (authorization) |
| - | - | Keycloak | Issuer of VCs on the Consumer side | |
| - | - | Scorpio | Context Broker | |
| - | - | tmforum-api | Implementation of the TMForum APIs for handling contracts | Contract Management |
| - | - | contract-management | Notification listener for contract management events out of TMForum | Contract Management |
| - | - | PostgreSQL | PostgreSQL Database with PostGIS extensions | |
Note that some of the components shown in the diagram above are not implemented yet.
This section provides a detailed description of each of the modules that make up the FIWARE DSC and their interaction flows.
This framework supports authentication mechanisms based on decentralized identity management built on W3C standards (DID, Verifiable Credentials). It implements the OID4VC family of protocols defined by the OpenID Foundation for the exchange of VCs (SIOPv2, OID4VP). This module allows organizations or users within those organizations that hold the required VCs to authenticate against the connector and obtain a valid JWT token with which they can invoke either the connector's own services (e.g., TM Forum APIs, DSP-based control plane) or services linked to products exposed through the connector.
The framework consists of the following components:
- VCVerifier: implements credential verification functions using the OID4VP protocol with the user-side system that stores credentials (a digital wallet in the case of end users, another storage system in the case of software agents such as AI agents, devices, robots, etc.)
- credentials-config-service: maintains the configuration of which VCs, containing which roles/claims, the VCVerifier must request for each product/service
- trusted-issuers-list: maintains a registry of organizations (identified by their DID) that are considered trusted issuers of certain classes of VCs containing certain roles/claims. Provides an EBSI Trusted Issuers Registry compatible API
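The per-service configuration maintained by the credentials-config-service can be pictured as a lookup from a client id and scope to the credential requirements the VCVerifier must request. The sketch below is illustrative only; the service ids, scopes, and credential types are hypothetical, not the service's actual schema:

```python
# Illustrative sketch of the credentials-config-service lookup: which VC
# types and claims the Verifier must request for a given service and scope.
# Service ids, scopes, and credential types are hypothetical examples.
CREDENTIALS_CONFIG = {
    ("data-service-bff", "operator"): {
        "credential_types": ["VerifiableCredential", "OperatorCredential"],
        "required_claims": {"roles": ["OPERATOR"]},
    },
    ("data-service-bff", "reader"): {
        "credential_types": ["VerifiableCredential", "UserCredential"],
        "required_claims": {"roles": ["READER"]},
    },
}

def required_credentials(client_id: str, scope: str) -> dict:
    """Return the VC requirements the Verifier should include in its request."""
    try:
        return CREDENTIALS_CONFIG[(client_id, scope)]
    except KeyError:
        raise LookupError(f"no credential configuration for {client_id}/{scope}")
```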
The following figure illustrates the sequence of steps followed when a user authenticates using OID4VC against a web application/portal through which the user accesses services implemented by a given product exposed through the connector, and how the backend-for-frontend (BFE) components invoke the backend system's REST APIs using JWTs containing information from verifiable credentials linked to the user:
Steps
- The user tries to access a page linked to a protected service of the web application or clicks the login button when there is not yet an authenticated session.
- The login/session BFE component detects the absence of a session, determines the resource to be accessed (scope), and starts the login process. The BFE creates a pre-authentication context (state, nonce) and redirects (302) the browser to the Verifier's start page, passing the client_id, state, nonce, scope, and response_type (code).
- The browser navigates to the Verifier's start page. The Verifier validates the input parameters, creates a transaction identifier (tx_id), and generates the protocol correlation artifacts (state, nonce, expiration).
- The Verifier responds by serving its own HTML page showing a QR code. The QR encodes a minimal request that includes the verifier's client_id and a request_uri. The Verifier page starts a status polling loop to detect when authentication has been completed.
- The user scans the QR code with their mobile wallet. The wallet extracts the request_uri and knows against which verifier it is operating.
- The wallet invokes the Verifier's request_uri.
- The Verifier consults the credentials_config_service, which establishes which VCs and which claims/roles to require when someone attempts to authenticate against the service identified by the client id (BFE id) and scope.
- The Verifier generates a Request Object that it returns to the wallet. This includes the client_id, response_type, response_mode=direct_post, response_uri, state, nonce, and the dcql_query specifying which VCs and claims/roles must be requested.
- The wallet shows the user which organization requests which data and for what purpose. If the user consents, the wallet selects the required credentials and generates the OID4VP response.
- The wallet sends the OID4VP response to the Verifier's response_uri using direct_post, including the vp_token and, where applicable, the SIOPv2 id_token.
- The Verifier cryptographically validates the presentation (signature, holder binding, expiration, revocation) and verifies that all requested VCs are included, with the specified claims/roles, and that those VCs have been signed by an organization that (a) is a participant in the data space and (b) is a trusted issuer of those VCs in the global or local trusted_issuers_list.
- If verification is successful, the Verifier generates the JWT and creates an authorization code associated with the transaction. The status endpoint returns the completed condition to the QR page.
- Upon detecting completion, the Verifier page redirects to the BFE callback with the authorization code and state.
- The BFE validates the state and invokes the Verifier's token endpoint to obtain the JWT, passing the authorization code.
- The Verifier verifies the transaction is completed and that the authorization code is valid, then returns the JWT access token.
- The BFE creates the user's web session (e.g., a server-side session referenced by an HttpOnly cookie) and associates with that session the JWT access token.
- From that point on, the browser operates only against BFE endpoints, sending the HttpOnly cookie with each request. The browser does not see or manage JWTs.
- When a page needs business data, the browser calls the business BFE components. These retrieve the JWT access token associated with the session.
- The business BFE components invoke the APIs by providing the JWT in the Authorization header:
Bearer <JWT>. If the JWT expires, the BFE obtains a new JWT from the Verifier or restarts the verification flow if the policy requires a step-up.
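The session handling in the last steps can be sketched as follows. This is a minimal illustration of the BFE pattern described above (opaque cookie in the browser, JWT kept server-side); the in-memory store and function names are assumptions, not the connector's implementation:

```python
import secrets
import time

# Minimal sketch of the BFE session handling described above: the browser
# holds only an opaque HttpOnly cookie value; the JWT stays server-side.
SESSIONS: dict = {}

def create_session(access_token: str, expires_in: int) -> str:
    """Store the JWT server-side and return the opaque cookie value."""
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = {
        "access_token": access_token,
        "expires_at": time.time() + expires_in,
    }
    return session_id

def auth_header_for(session_id: str) -> dict:
    """Build the Authorization header a business BFE component would send."""
    session = SESSIONS[session_id]
    if session["expires_at"] <= time.time():
        # Here the real BFE would refresh the token at the Verifier or
        # restart the OID4VP flow if the policy requires a step-up.
        raise PermissionError("token expired, re-authentication required")
    return {"Authorization": f"Bearer {session['access_token']}"}
```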
The following figure illustrates the authentication process in the M2M scenario:
Steps
- An application to which a given organization has assigned the VCs necessary to access the services offered by a given provider requests authentication by sending a request to a connection point offered by the Verifier component.
- In its response, the Verifier component asks the application to send (a) the VCs required by the services it intends to access and (b) any other VCs deemed necessary (steps 2-3).
- The application checks whether the Verifier is linked to an organization that is a trusted participant in the data space (step 4, necessary to prevent any agent from attempting to impersonate the Verifier) and, if so, sends the requested VCs by making a request to an access point specified by the Verifier (step 5).
- According to its configuration, the Verifier component verifies whether the organization that issued the VCs linked to the requested services is a trusted organization in the data space (step 6) and, moreover, whether it is a trusted issuer of those VCs (step 7.a). It also checks whether the rest of the VCs sent are signed by issuers trusted at the global data-space level (step 7.b).
- If verification is completed successfully, the Verifier component generates a JWT token that is transmitted to the application (step 8).
- Using the token it has been given, the application invokes a service (step 9).
It is important to emphasize that the authentication process (steps 1 to 19 in the H2M scenario and steps 1 to 8 in the M2M scenario) only needs to be carried out once. Once the access token has been obtained, the services can be invoked multiple times. The authentication process only needs to be repeated when the access token expires.
A detailed description of the steps to be performed by client applications and service providers can be found in the Service Interaction (M2M) documentation.
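Since the token is reusable until it expires, an M2M client would typically wrap the authentication flow in a small cache. A sketch, where the authenticate callback stands in for the full OID4VP exchange with the Verifier:

```python
import time

# Sketch of M2M token reuse: authenticate once, reuse the JWT until it is
# about to expire. `authenticate` is a stand-in for the OID4VP exchange.
class TokenCache:
    def __init__(self, authenticate, skew: int = 30):
        self._authenticate = authenticate  # returns (token, lifetime_seconds)
        self._skew = skew                  # refresh slightly before expiry
        self._token = None
        self._expires_at = 0.0

    def token(self) -> str:
        if self._token is None or time.time() >= self._expires_at - self._skew:
            self._token, lifetime = self._authenticate()
            self._expires_at = time.time() + lifetime
        return self._token
```

With this in place, every service invocation asks the cache for the current token and the full authentication flow only runs on the first call or near expiry.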
This framework implements an ABAC (Attribute Based Access Control) authorization architecture based on policies defined on:
- claims linked to users' VCs (applications or natural persons) within JWTs obtained in the authentication process required prior to service invocation,
- the operation being invoked,
- specific fields of the data to be accessed or processed (referenced in the path included in the operation request or carried in the payload),
- environment/context conditions.
The framework integrates components performing PEP, PDP, PIP, and PAP/PRP functions:
- Apache APISIX: essentially implements the PEP (Policy Enforcement Point) functions and can easily be configured to integrate elements implementing the PIP functions. Uses the OPA plugin for policy decisions.
- OPA (Open Policy Agent): capable of interpreting and applying policies expressed in the W3C ODRL language, implementing the PDP (Policy Decision Point) functions. The authorization module can process policies based on the Gaia-X ODRL VC profile.
- ODRL-PAP: allows the configuration of ODRL policies interpretable by the OPA engine (implementing the PAP/PRP functions), making it possible to define ODRL-based authorization policies in a simplified manner.
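For illustration, an ODRL policy of the kind configured through the ODRL-PAP might look like the following. The target asset, constraint operands, and role value are invented for the example, not defaults shipped with the connector:

```python
# Illustrative ODRL policy of the kind the ODRL-PAP could configure for the
# PDP: permit read access on a dataset to holders of a given VC role. The
# URIs, leftOperand, and role value are invented for the example.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Policy",
    "uid": "https://provider.example.org/policy/1000",
    "permission": [{
        "target": "https://provider.example.org/assets/weather-data",
        "action": "odrl:read",
        "constraint": [{
            "leftOperand": "vc:role",
            "operator": "odrl:eq",
            "rightOperand": "READER",
        }],
    }],
}
```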
The following figure illustrates the steps implemented by the Authorization Framework when a request directed to a REST API exposed through the connector is received. The scheme is the same for both H2M and M2M scenarios:
Steps
Once the authentication phase has been completed and an access token has been obtained:
- The PEP component (APISIX) receives a request for a service whose access is subject to policy enforcement.
- The PEP component (APISIX) extracts and verifies the JWT and invokes the OPA component, passing this JWT as well as information about the received request, such as the type of operation, path, and input payload.
- The PDP component (OPA) checks, based on the information delivered by the PEP component (APISIX) and contextual information obtained from a PIP service, whether the request can be authorized, taking into account the policies defined by the product provider and configured through the ODRL-PAP component.
- The PDP component (OPA) returns to the PEP component (APISIX) the decision on whether to authorize the request.
- If the request is determined to be authorizable, the PEP component (APISIX) forwards the request to the service.
- The PEP component (APISIX) forwards the response to the request to whoever originally invoked it.
- Depending on the configuration, the PEP component (APISIX) can record in the logging system the information carried with the request, whether or not that request was rejected, the policies governing that decision, and the returned value.
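The decision in steps 3-4 can be pictured as a pure function over the JWT claims and request metadata. The following is a simplified stand-in for OPA's evaluation of the configured ODRL policies; the claim names and rule shape are illustrative, not the real schema:

```python
# Simplified stand-in for the PDP decision: given the JWT claims and
# request metadata forwarded by the PEP, decide allow or deny against a
# set of rules. Claim names and rule shape are illustrative only.
def pdp_decision(jwt_claims: dict, method: str, path: str, rules: list) -> bool:
    for rule in rules:
        if (rule["method"] == method
                and path.startswith(rule["path_prefix"])
                and rule["required_role"] in jwt_claims.get("roles", [])):
            return True
    return False  # deny by default

# Hypothetical rule: holders of the READER role may GET NGSI-LD entities.
rules = [{"method": "GET", "path_prefix": "/ngsi-ld/v1/entities",
          "required_role": "READER"}]
```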
This framework relies on two components:
- TMForum-API: implements access, using standard TM Forum Open APIs, to the Catalog of Product Specifications and Offers, the Product negotiation and contracting (ordering) processes, and the Product Inventory.
- Contract-Management: subscribes to certain notifications generated by the tmforum-api component in order to implement integration actions with other frameworks/components of the connector.
TM Forum maintains and evolves a set of API specifications (TM Forum Open APIs) on which to base the development of systems that support the business and operational processes of any provider of digital products and services. These specifications have been adopted within the DOME project (Distributed Open Marketplace for Europe), a strategic EU project under the Digital Europe programme.
Using the TM Forum Open APIs, a participant playing the provider role can manage Catalogs of ProductOfferings around ProductSpecifications. The specification of a Product comprises a set of ServiceSpecifications as well as ResourceSpecifications that need to be deployed to support the execution of the specified services. Among the characteristics of a ProductOffering defined around a ProductSpecification are the terms and conditions (productOfferingTerm) or pricing models (productOfferingPrice) that the Provider making the offer wishes to apply.
Using the TM Forum Open APIs, customers can place orders for the acquisition of products (ProductOrders). When a Customer places a ProductOrder, it may do so by accepting the characteristics established by default in the ProductOffering published by a Provider, but the Customer may also negotiate new terms and conditions by creating a "term proposal" (Quote) that it submits as an input argument in its ProductOrder, thus entering into a negotiation until they reach a Quote that both parties accept. When a ProductOrder reaches completed status, a Product entity is created that represents the product effectively provisioned for the Customer. The Products, Services, and Resources contracted by a Customer will be recorded in corresponding inventories that the Customer can consult and manage.
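For orientation, a TMF622-style ProductOrder payload might look like the sketch below. The ids, DID, and offering reference are hypothetical, and real payloads carry many more fields (state, order dates, prices, quotes, etc.):

```python
# Illustrative TMF622-style ProductOrder payload: the Customer orders a
# published ProductOffering. Ids and names are hypothetical examples.
product_order = {
    "productOrderItem": [{
        "id": "order-item-1",
        "action": "add",
        "productOffering": {
            "id": "urn:product-offering:weather-data",
            "name": "Weather Data Offering",
        },
    }],
    "relatedParty": [{
        "id": "did:web:consumer.example.org",
        "role": "Customer",
    }],
}
```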
It is important to note that the TM Forum Open APIs implemented by the TMForum-API component are protected through the connector's authentication and authorization mechanisms when they are accessed from outside.
The Contract-Management component subscribes to notifications issued by the TMForum-API component when a given ProductOrder has been completed. When this happens, it registers the DID of the participant that requested that ProductOrder in the connector's local trusted_issuers_list as a trusted issuer of the VCs and claims/roles specified for the product. At the same time, it provisions in the ODRL-PAP the policies that must be applied for the product.
Steps
- The PEP component (APISIX) receives a ProductOrder request.
- The PEP component (APISIX) verifies the JWT accompanying the request and, based on the information in the JWT together with information about the requested operation (ProductOrder), determines whether the request should be handled, relying on the PDP component (OPA), which applies the ODRL policies governing access to the TM Forum APIs.
- Upon receipt of the ProductOrder request, a ProductOrder object is created and the process of provisioning and activating the associated services and resources begins. The product provider may manage this process manually, or it may be automatic.
- Once those provisioning and activation processes are completed, the status of the ProductOrder object changes to "Completed" and the contract-management component receives a notification with the ProductOrder information, which, among other things, contains the DID of the organization (Customer) that created the ProductOrder.
- The Contract-Management component registers the DID of the Customer organization in the local trusted-issuers-list and the necessary policies are provisioned in the ODRL-PAP component.
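The reaction of the Contract-Management component to a completed ProductOrder (steps 4-5) can be sketched as follows. The event shape and the agreedPolicies field are assumptions for illustration, and the registry/pap lists stand in for calls to the trusted-issuers-list and ODRL-PAP APIs:

```python
# Sketch of the Contract-Management reaction to a "ProductOrder completed"
# notification. The event shape and the agreedPolicies field are assumed;
# `registry` and `pap` stand in for the trusted-issuers-list and ODRL-PAP.
def on_product_order_event(event: dict, registry: list, pap: list) -> None:
    order = event["event"]["productOrder"]
    if order.get("state") != "completed":
        return  # only completed orders trigger provisioning
    customer_did = next(party["id"] for party in order["relatedParty"]
                        if party["role"] == "Customer")
    registry.append(customer_did)                # local trusted-issuers-list
    pap.extend(order.get("agreedPolicies", []))  # ODRL-PAP provisioning
```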
The EDC framework integrated as part of the FIWARE DSC enables it to support access to the catalog of product specifications, the product contracting process, as well as control over the start, suspension, resumption, and termination of exchange processes between consumers and services of contracted products using the Eclipse/IDSA Data Space Protocol (DSP). DSP prescribes a set of operations, message types, and HTTPS APIs (bindings) aimed at interoperability. In practice, DSP organizes the interaction between Consumer and Provider as a "publish -> negotiate -> access" sequence, structured into three protocols: Catalog, Contract Negotiation, and Transfer Process. Given the asynchronous nature of the operations, the use of the DSP protocol requires that a connector be deployed on both sides, consumer and provider.
The FDSC-EDC component provides:
- Catalog Protocol — discovery of products in the catalog via DCAT
- Contract Negotiation Protocol — stateful negotiation of usage contracts for datasets
- Transfer Process Protocol — orchestration of data access (pull/push) once a valid agreement exists
- Uses the TMForum API as storage backend
- Two authentication flavors: OID4VC and Eclipse DCP
The exchange of HTTP messages linked to DSP operations requires an access token. In the FIWARE DSC, both the family of OID4VC protocols and the Eclipse DCP (Decentralized Claims Protocol) are supported.
Steps
- The connector on the consumer side obtains a Self-Issued Token from its Identity Hub (STS-Service), which includes an ID-Token and an Access Token.
- The connector on the consumer side sends a request to the DSP endpoint (EDC Framework) of the FIWARE DSC including both tokens.
- The FIWARE DSC's EDC implementation obtains the DID carried in the ID-Token and requests the corresponding did-document by accessing the Consumer's IdentityHub.
- The connector on the consumer side returns the did-document, which contains the endpoint (address) of the Credential Service implemented in the Identity Hub on the consumer side.
- The FIWARE DSC's EDC implementation on the provider side requests the credentials required to access the DSP operation invoked in step 2 by sending a request to the Credential Service implemented in the consumer's Identity Hub, using the Access Token that the consumer sent in step 1.
- The Credential Service returns a Verifiable Presentation containing the requested credentials.
- The FIWARE DSC's EDC implementation on the provider side verifies that all the credentials requested in step 5 have been presented within the Verifiable Presentation sent in step 6, confirming that the issuers of those credentials are trusted issuers and, therefore, that their DIDs are registered in the local trusted-issuer-list.
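The provider-side check in the final step can be sketched as a predicate over the received Verifiable Presentation. The field names loosely follow the W3C VC data model and are not the exact wire format:

```python
# Sketch of the provider-side check: every requested credential type is
# present in the Verifiable Presentation, and every issuer DID appears in
# the local trusted-issuers-list. Field names loosely follow the W3C VC
# data model; this is not the exact wire format.
def presentation_satisfies(vp: dict, requested_types: set,
                           trusted_issuers: set) -> bool:
    presented = vp.get("verifiableCredential", [])
    presented_types = {t for vc in presented for t in vc["type"]}
    if not requested_types <= presented_types:
        return False  # a requested credential is missing
    return all(vc["issuer"] in trusted_issuers for vc in presented)
```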
The integration of the EDC framework into the FIWARE DSC entails mapping concepts linked to the DSP protocol to concepts implemented following the TM Forum information model in the Product Catalog and Contracting Management Framework. In this way, a product whose specifications have been registered through that framework can also be contracted through the DSP protocol.
The Catalog protocol covers the discovery of products in the catalog. A Provider exposes a Catalog Service that publishes metadata about its assets (Datasets) and the conditions under which they are available. The Consumer uses that service to obtain a complete catalog or consult a specific dataset and, with that information, decides whether to start a contractual negotiation.
DSP reuses existing RDF vocabularies. In particular, the catalog is expressed with DCAT, and the terms of use (offer) are expressed as ODRL Offers associated with the Dataset.
The following figure shows the correspondence between the entities handled in the TM Forum model and in the DSP Catalog subprotocol:
- the DCAT:Catalog entity maps directly to the TMForum:Catalog entity
- the "participantId" attribute in a DCAT:Catalog maps to the id of a TMForum:RelatedParty entity with the role "Provider"
- the DCAT:Dataset entity maps directly to the TMForum:ProductSpecification entity, and the linked details (metadata) map to values in that entity's "productSpecCharacteristics" field
- the DCAT:Offer entity maps directly to the TMForum:ProductOffering entity, which contains fields with the policies
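The Dataset mapping above can be sketched as a small transformation. The property names (dct:title, the flattening of productSpecCharacteristic values) are illustrative, not the exact implementation:

```python
# Sketch of the Dataset mapping above: a TMForum ProductSpecification is
# rendered as a DCAT Dataset, with productSpecCharacteristic entries
# flattened into metadata fields. Property names are illustrative.
def to_dcat_dataset(spec: dict) -> dict:
    dataset = {
        "@type": "dcat:Dataset",
        "@id": spec["id"],
        "dct:title": spec["name"],
    }
    for char in spec.get("productSpecCharacteristic", []):
        values = char["productSpecCharacteristicValue"]
        dataset[char["name"]] = values[0]["value"]
    return dataset
```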
The Contract Negotiation protocol allows Consumer and Provider to agree, in a traceable and controlled manner, on a usage contract for a Dataset. In DSP, a negotiation is a stateful process identified by an IRI, and both participants must maintain a coherent view of that state.
The following figure shows the correspondence between the entities handled in the TM Forum model and in the DSP Contract Negotiation subprotocol:
- a ContractNegotiation entity corresponds to a TMForum:Quote object
- an Offer under negotiation (proposed by the consumer or counter-proposed by the provider) corresponds to a TMForum:QuoteItem object
- a TMForum:ProductOrder entity is created when a ContractNegotiation reaches verified status
- the Product and Agreement entities are created when the ContractNegotiation reaches finalization status
Steps
- Offer selection: the Consumer consults the Provider's catalog, chooses a Dataset, and selects an Offer (ODRL) that expresses the terms of use. In addition, it identifies in the DataService the URL of the Contract Negotiation endpoint.
- Negotiation initiation: the Consumer creates a new negotiation (IRI) and sends the Provider a Contract Request Message with the reference to the Dataset and the proposed Offer. The Provider responds with an ACK.
- Proposal and counteroffer: the Provider evaluates the request against its internal policies. If it accepts the Offer as is, it can move toward agreement; if it requires changes, it sends a Contract Offer Message with the counteroffer. This phase can iterate until convergence or termination.
- Acceptance by the Consumer: when the Consumer accepts the last Offer sent by the Provider, a point is reached at which the Provider can materialize a formal Agreement (contract), normally as an instance of an ODRL Agreement derived from the agreed Offer.
- Issuance of the agreement: the Provider sends the Consumer a Contract Agreement Message with the resulting Agreement. The Consumer responds with an ACK and proceeds to verify the consistency of the agreement.
- Verification and finalization: DSP contemplates Agreement verification messages. The Consumer sends a verification to the Provider; the Provider responds with ACK and may issue a final completion message. After this closure, the Agreement is "finalized" and the Dataset is considered available to the Consumer under the agreed conditions.
- Result: the main output of Contract Negotiation is an identified Agreement (contract agreement id). That identifier will be the reference that the Consumer uses next to initiate a Transfer Process associated with the agreed Dataset/Distribution.
The Transfer protocol orchestrates effective access to the Dataset once there is a valid Agreement. It is important to distinguish two planes: the control plane (where DSP messages are exchanged and states are managed) and the data plane (where the actual transport of data takes place through a specific wire protocol, typically outside DSP's scope).
The Consumer should not initiate a Transfer Process without having the finalized Agreement available. In practice, the Transfer Request includes a reference to the contractAgreementId obtained in the previous step, so that the Provider can validate that the transfer request is authorized and subject to the agreed policy.
Steps
- Preparation: the Consumer selects the Dataset Distribution and obtains from the catalog the Provider's Transfer Process endpoint. It also retrieves the contractAgreementId resulting from Contract Negotiation.
- Transfer request: the Consumer creates a new Transfer Process (IRI) and sends a Transfer Request Message to the Provider, including the reference to the contractAgreementId, the reference to the Dataset/Distribution, and parameters describing the type of transfer. In a "pull" transfer, the Provider must return to the Consumer the information necessary for it to obtain the data. In a "push" transfer, the Consumer provides a destination endpoint/location and the Provider initiates the sending.
- ACK and state synchronization: the Provider responds with an ACK, confirming that it has accepted the request and that both participants share the identifier of the Transfer Process.
- Startup of the data plane: once the agreement and the usage policy have been validated, the Provider sends a Transfer Start Message. The actual transfer is executed on the data plane using a wire protocol (HTTP, S3, Kafka, etc.) agreed by profile or by the Distribution.
- Execution and monitoring: during the life of the process, suspension/resumption messages or operational events may be exchanged.
- Finalization: when the Provider considers the transfer completed (in finite transfers) or the connection/streaming established (in non-finite transfers), it issues a Transfer Completion Message and the Consumer responds with ACK. Alternatively, either party may terminate the process by means of a Transfer Termination Message.
- Result: the Consumer obtains effective access to the data (by download, stream, or push reception), maintaining the traceability of the Transfer Process and the reference to the Agreement that authorizes it.
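As an illustration of the control-plane messages involved, a pull-style Transfer Request Message might look like the following. The field names follow the published Dataspace Protocol drafts, but the context URL, ids, format value, and callback address are made-up examples:

```python
# Illustrative DSP Transfer Request Message for a pull transfer. Field
# names follow the Dataspace Protocol drafts; the context URL, ids,
# format value, and callback address are made-up examples.
transfer_request = {
    "@context": "https://w3id.org/dspace/2024/1/context.json",
    "@type": "dspace:TransferRequestMessage",
    "dspace:consumerPid": "urn:uuid:consumer-process-0001",
    "dspace:agreementId": "urn:uuid:contract-agreement-0042",
    "dct:format": "example:HTTP_PULL",
    "dspace:callbackAddress": "https://consumer.example.org/callback",
}
```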
From an architectural standpoint, the value of the Transfer Process lies in the explicit separation between coordination and transport: the control plane provides interoperability in state negotiation and access authorization, while the data plane can be optimized by domain provided that it respects the conditions of the Agreement.
Find out more in the Dataspace Protocol Integration Documentation.
The FIWARE DSC connector incorporates a marketplace portal based on the FIWARE BAE Marketplace that allows administrator users of participating organizations acting as providers to create catalogs of product specifications and offers around them. Likewise, through the portal, users linked to consumer organizations with the appropriate credentials can consult the specifications of products offered through the connector and the associated offers and, when they find a product of interest, contract it. They can also consult the inventory of products they have contracted.
Basically, the Marketplace Portal encapsulates access to the Product Catalog and Contracting Management Framework based on the TM Forum Open APIs, and enables end users, through a graphical web interface, to perform the operations that a system could perform by invoking the TM Forum APIs directly.
Find more information in the dedicated Marketplace Integration Section.
Integration with the Gaia-X Trust Framework
In order to be compatible with common European frameworks for data spaces, the FIWARE Data Space Connector provides integrations with the Gaia-X Trust Framework. Gaia-X Digital Clearing Houses can be used as Trust Anchors for the FIWARE Data Space Connector.
Find out more in the dedicated Gaia-X Integration Documentation.
The FIWARE Data Space Connector Repository provides a local deployment of a Minimum Viable Dataspace.
- Find a detailed documentation here: Local Deployment
This deployment makes it easy to spin up a minimal data space on a local machine using Maven and Docker (with k3s). It can be used to try out the connector, to get familiar with the different components and flows within the data space, or to perform tests against the different APIs provided.
Additional deployment profiles are available for specific trust frameworks:
```shell
# Default local deployment
mvn clean deploy -Plocal

# With Gaia-X trust framework integration
mvn clean deploy -Plocal,gaia-x

# With support for the Data Space Protocol
mvn clean deploy -Plocal,dsp
```

The Data-Space-Connector is a Helm Umbrella-Chart, containing all the sub-charts of the different components and their dependencies. Its sources can be found here.
The chart is available at the repository https://fiware.github.io/data-space-connector/. You can install it via:
```shell
# add the repo
helm repo add dsc https://fiware.github.io/data-space-connector/
# install the chart
helm install <DeploymentName> dsc/data-space-connector -n <Namespace> -f values.yaml
```

Note that due to the app-of-apps structure of the connector and the different dependencies between the components, a deployment without providing any configuration values will not work. Make sure to provide a values.yaml file for the deployment, specifying all necessary parameters. This includes parameters of the connected data space (e.g., trust anchor endpoints), DNS information (Ingress or OpenShift Route parameters), the structure and type of the required VCs, the internal hostnames of the different connector components, and the configuration of the DID and keys/certs.
Configurations for all sub-charts (and sub-dependencies) can be managed through the top-level values.yaml of the chart. It contains the default values of each component and additional parameters shared between the components. The configuration of the applications can be changed under the key <APPLICATION_NAME>, please see the individual applications and their sub-charts for the available options.
For example, to change the image tag of Keycloak:

```yaml
keycloak:
  image:
    tag: LATEST_GREATEST
```

In order to test the helm-charts provided for the FIWARE Data Space Connector, an integration-test framework based on Cucumber and JUnit 5 is provided: it.
The tests can be executed via:

```shell
mvn clean integration-test -Ptest
```

They will spin up the Local Data Space and run the test scenarios against it.
Additional and more detailed documentation about the FIWARE Data Space Connector, specific flows and its deployment and integration with other frameworks:
- Dataspace Protocol Integration
- Gaia-X Integration
- Marketplace Integration
- Central Marketplace
- Contract Negotiation
- Service Interaction (M2M)
- Contract Management flows
- Local Deployment
- Additional documentation
- Ongoing Work
Additional resources about the FIWARE Data Space Connector and Data Spaces in general: