This repository was archived by the owner on Apr 1, 2025. It is now read-only.
The vm-based Application Runtime Plugin Architecture #111
Description
To deploy VM-based applications onto cloud platforms such as QingCloud, AWS, and OpenStack, we need to deploy two kinds of VM-based clusters:
- a metadata cluster with metad installed (etcd as the backend store)
- application clusters with confd installed
The metadata cluster provides the metadata service for users' applications. confd is the auto-configuration daemon that runs on each application cluster instance and updates the application's configuration based on information from the metadata service.
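For illustration, a confd template resource for such an application instance might look like the sketch below; the paths, metadata keys, and reload command are hypothetical, not part of this design.

```toml
# /etc/confd/conf.d/app.toml -- hypothetical confd template resource
[template]
src = "app.conf.tmpl"                  # template under /etc/confd/templates
dest = "/opt/app/conf/app.conf"        # rendered configuration file
keys = ["/clusters/my-cluster/hosts"]  # metadata keys to watch
reload_cmd = "systemctl reload app"    # run when dest changes
```

When the watched keys change in the metadata service, confd re-renders the template and runs the reload command.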
Things that need to be designed:
- Where to deploy the metadata cluster: per cloud, per user, or per cloud and per user?
- How do the OpenPitrix system, metad, and confd communicate?
- Multi-tenancy support
There are two possible solutions for communication among these components:
- Wrap confd and metad in a REST-based service so they can send requests back and forth. For security, the REST service requires a certificate to authorize requests.
- Generate an SSH key pair for the OpenPitrix runtime subsystem and create the metadata VMs and application cluster VMs with that key pair, so the runtime can execute commands over SSH without a password.
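The second option can be sketched as follows. This is only an illustration of building a non-interactive SSH invocation; the key path, user, and host are hypothetical placeholders.

```python
import shlex

def build_ssh_command(host: str, command: str,
                      key_path: str = "/etc/openpitrix/runtime_rsa",
                      user: str = "root") -> list[str]:
    """Build a non-interactive ssh invocation that relies on the
    pre-installed key pair instead of a password prompt."""
    return [
        "ssh",
        "-i", key_path,             # runtime subsystem's private key
        "-o", "BatchMode=yes",      # fail instead of prompting for a password
        "-o", "StrictHostKeyChecking=no",
        f"{user}@{host}",
        command,
    ]

cmd = build_ssh_command("192.168.0.10", "systemctl start confd")
print(shlex.join(cmd))
```

`BatchMode=yes` makes ssh fail fast rather than hang on a password prompt, which matters when the runtime executes commands unattended.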
A case: steps to create an application cluster:
- User starts to deploy an application.
- The runtime service first checks whether the metadata service has been created; if not, it creates the metadata service in the background.
- The runtime service creates the application cluster and starts the confd daemon on each instance.
- The runtime service registers the application cluster info with the metadata service.
- confd on each cluster instance watches for changes to the metadata, refreshes its configuration, and executes a reload command if appropriate.
- The runtime service registers the application init and start commands (the commands that bring the application up) in the metadata service. The cluster then executes those commands from the metadata service. After everything finishes successfully, OpenPitrix transitions the application cluster to the "active" status.
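The steps above can be sketched as a minimal in-memory model. All class and method names here are hypothetical stand-ins, not OpenPitrix APIs; the watcher callbacks stand in for confd daemons.

```python
class MetadataService:
    """Stands in for the metad/etcd metadata cluster."""
    def __init__(self):
        self.store = {}
        self.watchers = []

    def register(self, key, value):
        self.store[key] = value
        for watcher in self.watchers:   # notify every confd watcher
            watcher(key, value)

    def watch(self, callback):
        self.watchers.append(callback)

class Runtime:
    """Stands in for the OpenPitrix runtime service."""
    def __init__(self):
        self.metadata = None

    def create_cluster(self, name, instances):
        # Create the metadata service lazily if it does not exist yet.
        if self.metadata is None:
            self.metadata = MetadataService()
        # "Start" a confd watcher per application instance; each records
        # that its instance refreshed configuration on a metadata change.
        refreshed = []
        for inst in instances:
            self.metadata.watch(lambda k, v, inst=inst: refreshed.append(inst))
        # Register the cluster info, then the init/start commands.
        self.metadata.register(f"/clusters/{name}/hosts", instances)
        self.metadata.register(f"/clusters/{name}/cmd", "app-init && app-start")
        # Everything succeeded, so the cluster becomes active.
        return "active", refreshed

runtime = Runtime()
status, refreshed = runtime.create_cluster("c1", ["vm-1", "vm-2"])
print(status)
```

Each `register` call notifies every watcher, mirroring how each confd instance reacts to every metadata change for its cluster.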