Deploy SynxDB Cloud
This guide provides a step-by-step walkthrough for installing SynxDB Cloud, from the initial setup to accessing the console for the first time.
Step 1. Prerequisites
Before beginning the installation, ensure your environment is ready.
Environment and system requirements
Ensure your environment meets the following criteria before proceeding:
Operating system: A Linux server with proper network access.
Container runtime: Docker is installed on your server.
Kubernetes cluster: A running Kubernetes cluster that can:
Automatically generate an external hostname/IP for `LoadBalancer`-type services.
Automatically provision `PersistentVolume`s for `PersistentVolumeClaim`s.
Object storage: An S3-compatible object storage service with proper read and write access is available.
Client tools:
`kubectl` and Helm are installed on your server. If Helm is not installed, follow the official instructions on the Helm website.
Obtain the installation package
The official installation package is required.
Action: Contact Synx Data Labs technical support to get the installation package.
Package size: About 5.7 GB
MD5 checksum:
fe2e0ea4559bacaee7c3dab39bdb76af
To ensure the package is complete and not corrupted, verify the checksum after downloading.
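A minimal verification sketch using the standard `md5sum` utility; the package filename below is a placeholder, so substitute the actual file you downloaded:

```shell
# Compare a file's MD5 digest against an expected value.
verify_md5() {
  # $1 = file path, $2 = expected MD5 digest
  actual=$(md5sum "$1" | awk '{print $1}')
  [ "$actual" = "$2" ]
}

# Example (hypothetical filename):
# verify_md5 synxdb-cloud-package.tar.gz fe2e0ea4559bacaee7c3dab39bdb76af \
#   && echo "Checksum OK"
```

If the function returns a nonzero status, re-download the package before proceeding.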
Configure Kubernetes CSIDriver
The installation requires a specific setting in the Kubernetes cluster’s CSI (Container Storage Interface) Driver.
Check the current CSIDriver configuration:

```shell
kubectl get csidriver
```
Edit the CSIDriver. Set `fsGroupPolicy` to `None`. Open the driver configuration for editing. For example, if the driver is named `named-disk.csi.cloud-director.vmware.com`, use the following command:

```shell
kubectl edit csidriver <your-csi-driver-name>
```

Update the `spec` section. In the YAML file that opens, find the `spec` section and add or modify the `fsGroupPolicy` line as shown below:

```yaml
spec:
  attachRequired: true
  fsGroupPolicy: None  # <-- Add or modify this line.
  podInfoOnMount: false
  # ... other settings
```
Save and close the file to apply the changes.
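For scripted, repeatable setups, the same change can be applied non-interactively with `kubectl patch`. This is a sketch, assuming your cluster allows mutating `fsGroupPolicy` on an existing CSIDriver object; some Kubernetes versions treat it as immutable, in which case the object must be deleted and recreated instead:

```shell
# Non-interactive alternative to `kubectl edit` (assumption: the cluster
# permits updating fsGroupPolicy in place).
patch_fsgroup_policy() {
  # $1 = CSIDriver name, e.g. named-disk.csi.cloud-director.vmware.com
  kubectl patch csidriver "$1" --type merge \
    -p '{"spec":{"fsGroupPolicy":"None"}}'
}

# Usage:
# patch_fsgroup_policy named-disk.csi.cloud-director.vmware.com
```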
Import Docker images
The installation package includes several Docker images that need to be loaded and pushed to a container image registry.
Load the images. The images are provided as `.tar.gz` files in the `image/` directory of the installation package. Load each one using the `docker load` command. For example:
Note
`<s3-compatible-oss>` in the commands and configuration files is a placeholder. Replace it with the actual name of the object storage service you are using.

```shell
docker load < image/foundationdb/fdb-kubernetes-operator-v2.10.0.tar.gz
docker load < image/synxdb/synxdb-elastic-dbaas-1.0-RELEASE-117134.tar.gz
docker load < image/foundationdb/fdb-kubernetes-sidecar-7.3.63-1.tar.gz
docker load < image/foundationdb/fdb-kubernetes-monitor-7.3.63.tar.gz
docker load < image/dbeaver/cloudbeaver-23.3.5-884-g156f14-117016-release.tar.gz
docker load < image/<s3-compatible-oss>/<s3-compatible-oss>-RELEASE.2023-10-25T06-33-25Z.tar.gz
docker load < image/<s3-compatible-oss>/mc-RELEASE.2024-11-21T17-21-54Z.tar.gz
```
Tag and push the images. After loading, tag the images to match the private registry's URL and then push them. For example:

```shell
# Tag the image.
docker tag <image_name>:<tag> <your_registry_url>/<repository>/<image_name>:<tag>

# Push the image.
docker push <your_registry_url>/<repository>/<image_name>:<tag>
```
Repeat this for all loaded images.
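Loading each tarball by hand is repetitive; the load step can be sketched as a small shell loop over the package's `image/` directory, assuming the `image/<vendor>/<name>.tar.gz` layout shown above:

```shell
# Load every image tarball under the package's image/ directory.
load_all_images() {
  # $1 = path to the image/ directory
  for f in "$1"/*/*.tar.gz; do
    [ -e "$f" ] || continue   # skip when the glob matches nothing
    docker load < "$f"
  done
}

# Usage:
# load_all_images image
```

Tagging and pushing can be scripted the same way once you know your registry URL.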
Step 2. Install dependencies
The service console relies on a few external services. Install these services before proceeding.
Set up S3-compatible object storage
The service console requires an S3-compatible object storage service for metadata backups. Ensure that the service supports the s3v4 authentication protocol.
For testing purposes, you can set up an S3-compatible object storage service for metadata backup. To avoid potential conflicts, install it in a dedicated namespace, <s3-compatible-oss>-metabak.
Note
<s3-compatible-oss> in the commands and configuration files is a placeholder. Replace it with the actual name of the object storage service you are using.
Install the S3-compatible object storage service using the provided Helm chart.
```shell
helm install <s3-compatible-oss>-metabak helm/<s3-compatible-oss>-5.4.0.tgz \
  --namespace <s3-compatible-oss>-metabak \
  --timeout 10m \
  --wait \
  --create-namespace \
  -f example/<s3-compatible-oss>-values.yaml
```
Note
Before running, you might need to edit `example/<s3-compatible-oss>-values.yaml` to point to the S3-compatible object storage image that you have pushed to the private registry.
Install FoundationDB operator
FoundationDB is used by the service console for its metadata layer.
Action: Install the FoundationDB operator using the Helm chart.
Command:
```shell
helm install fdb-operator helm/fdb-operator-0.2.0.tgz \
  --namespace fdb \
  --timeout 30m \
  --wait \
  --create-namespace \
  -f example/foundationdb-values.yaml
```
Note
Remember to update `example/foundationdb-values.yaml` with the correct image paths from the registry.
Install CloudBeaver
CloudBeaver provides a web-based SQL client.
Action: Install CloudBeaver using its Helm chart.
Command:
```shell
helm install cloudbeaver helm/cloudbeaver-0.1.0.tgz \
  --namespace cloudbeaver \
  --timeout 10m \
  --wait \
  --create-namespace \
  -f example/cloudbeaver-values.yaml
```
Note
Update `example/cloudbeaver-values.yaml` with the correct image paths.
Install KubeRay operator
KubeRay provides a Kubernetes operator for managing Ray clusters, which is used by the service console for distributed computing tasks.
Action: Install the KubeRay operator using the provided Helm chart.
Example command:
```shell
helm install kuberay-operator helm/kuberay-operator-1.4.2.tgz \
  --namespace kuberay \
  --timeout 10m \
  --wait \
  --create-namespace \
  -f example/kuberay-values.yaml
```
Note
Update `example/kuberay-values.yaml` with the correct image paths.
Optional: Install monitoring tools
Prometheus and AlertManager are required to enable monitoring and alert features. The installation package includes a Helm chart and an example values file for the kube-prometheus-stack.
Action: Install the kube-prometheus-stack using the provided Helm chart.
Example command:
```shell
helm install prometheus helm/kube-prometheus-stack-72.3.0.tgz \
  --namespace monitor \
  --timeout 30m \
  --wait \
  --create-namespace \
  -f example/prometheus-values.yaml
```
Note
Before running, update `example/prometheus-values.yaml` with the correct image paths. For air-gapped or offline environments, set the `image.registry` field for each component to point to your private container registry.
Step 3. Install the service console
With the prerequisites and dependencies in place, you can install the main service console application.
A note on the database (production vs. testing)
For a production deployment, an external PostgreSQL-compatible database is required for reliability and data persistence. For testing purposes, the service console uses a simpler embedded database by default. This choice is configured in the next step.
Prepare the configuration file
The configuration for the service console is managed in a YAML file. An example is provided at example/dbaas-values.yaml.
Action: Open `example/dbaas-values.yaml` and modify it for the environment.
Key sections to modify:
Images: Replace all image names with the full paths to the images in the private registry.
OSS configuration: This section configures the connection to the S3 object storage. If the S3-compatible object storage service was installed in the previous step, the default values should work. The endpoint will be `http://<s3-compatible-oss>-metabak.<s3-compatible-oss>-metabak:32000`.

```yaml
oss:
  moscow:
    vendor: aws
    internal-region: default
    public-region: default
    endpoint: http://<s3-compatible-oss>-metabak.<s3-compatible-oss>-metabak:32000
    signatureVersion: s3v4
    access-key-id: <s3-compatible-oss>
    access-key-secret: password
```
Database (for production): Based on the note above, for a production environment, modify the `datasource` section with the external PostgreSQL connection string. Otherwise, the default settings can be used for testing.
Region and profile: Modify the default region (`moscow`) and deployment profiles to match production requirements. To change the region, globally replace all occurrences of `moscow` with the new name (for example, `ru-central1`).
Note
When you change the region name, you also need to update all other configuration items that refer to the old region name to ensure the settings are consistent.
Enable optional features: By default, features such as FoundationDB management, UnionStore management, TpServer, AI Bot - Data, AI Bot - Doc, and ML Cluster are hidden in the console (except `enable-fdb`, which defaults to `true` so that at least one metadata storage option is available when creating an account). To enable other features, add a `features` section under `dbaas.region` in `example/dbaas-values.yaml`:

```yaml
applicationConfig:
  dbaas:
    region:
      features:
        enable-fdb: true          # FoundationDB management page, FDB-related options, and the FDB metadata option when creating an account (defaults to true)
        enable-union-store: true  # UnionStore management page, UnionStore-related versions, and the UnionStore metadata option when creating an account
        enable-tp-server: true    # TpServer management page
        enable-data-mind: true    # AI data analysis platform
        enable-doc-mind: true     # Document intelligence platform
        enable-ml-cluster: true   # ML Cluster management page
```
Set a flag to `true` to show the corresponding feature in both the ops and user consoles, or omit it (defaults to `false`, except `enable-fdb`) to keep the feature hidden.
Note
TpServer relies on UnionStore for its underlying storage. To use TpServer, you must enable both `enable-tp-server` and `enable-union-store`; otherwise TpServer cannot be created. This requirement is independent of the account's metadata backend: accounts using either UnionStore or FoundationDB as their metadata type can host TpServer, as long as UnionStore is enabled at the deployment level.
AI Bot - Doc also requires `enable-union-store` in addition to `enable-doc-mind`, but the AI Bot - Doc entry only appears on the detail page of accounts whose `metadata_type` is UnionStore. For accounts based on FoundationDB, the AI Bot - Doc entry does not appear even if both flags are enabled. To use AI Bot - Doc, you must select UnionStore as the backend service when creating the account.
Configure AI Platforms (Optional): If you plan to use the AI Bot - Data and AI Bot - Doc features, in addition to enabling them with `enable-data-mind` and `enable-doc-mind` in the previous step, uncomment and update the relevant sections in `example/dbaas-values.yaml`. For details, see SynxDB AI Bot - Doc and SynxDB AI Bot - Data.
Note
In the example below, `<data-ai-platform>` and `<doc-ai-platform>` are placeholders for the actual configuration section names. Replace them with the values provided in your deployment package.

```yaml
## Below are example configs for AI platform components
## Update to your own values according to your environment
# <data-ai-platform>:
#   llms:
#     qwen3_8b:
#       show-name: "Qwen3-8B"
#       base-url: "http://10.14.10.1:30000/v1"
#       api-key: "for_openai"
#       model: "qwen/qwen3-8b"
# <doc-ai-platform>:
#   pdf-parser-endpoint: "http://10.14.10.1:6676/parse_pdf/"
#   elastic-search-url: "http://10.13.10.191:9200/"
#   model-services:
#     text-embedding:
#       model-name: "jina-embeddings-v2-base-zh"
#       url: "http://10.14.10.1:8000/embedding/v1/embeddings"
#       api-key: ""
#       provider: "openai"
#       batch-size: 2
#     multi-modal-embedding:
#       model-name: "bge-vl-large"
#       url: "http://10.14.10.1:8000/vl_embedding/v1/embeddings"
#       api-key: ""
#       provider: "openai"
#     llms:
#       default:
#         model-name: "zhipu/glm4-9b-chat"
#         url: "http://10.14.10.1:30001/v1/chat/completions"
#         api-key: "for_openai"
#         provider: "openai"
#         enable-thinking: "false"
#         max-tokens: 8192
#       qwen:
#         model-name: "qwen/qwen3-8b"
#         url: "http://10.14.10.1:30000/v1/chat/completions"
#         api-key: "for_openai"
#         provider: "openai"
#         enable-thinking: "false"
#         max-tokens: 8192
```
Run the installation command
Once the configuration file is ready, use Helm to deploy the service console.
Action: Run the `helm install` command.
Command:

```shell
helm install dbaas-integration helm/dbaas-integration-1.0-RELEASE.tgz \
  --namespace dbaas \
  --timeout 10m \
  --wait \
  --create-namespace \
  -f example/dbaas-values.yaml
```
This command creates a new namespace called `dbaas` and deploys all the necessary components. The `--wait` flag makes the command return only after the deployment succeeds.
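After the release completes, a quick sanity check with standard Helm and kubectl commands (using the release and namespace names from the command above) confirms that the release deployed and the pods came up:

```shell
# Post-install sanity check: show the Helm release status and list the pods.
check_install() {
  helm status dbaas-integration --namespace dbaas
  kubectl get pods --namespace dbaas
}

# Usage:
# check_install
```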
Step 4. Access the service console
By default, the service console is not exposed outside the Kubernetes cluster.
Access for testing: port forwarding
For testing, the easiest way to access it is by using port forwarding.
Set up port forwarding. Run these commands in the terminal. They find the correct pod and forward its port to the local machine.

```shell
# Get the pod name and save it to a variable.
export POD_NAME=$(kubectl get pods --namespace dbaas -l "app.kubernetes.io/name=dbaas-integration,app.kubernetes.io/instance=dbaas-integration" -o jsonpath="{.items[0].metadata.name}")

# Get the container port.
export CONTAINER_PORT=$(kubectl get pod --namespace dbaas $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")

# Start port forwarding.
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace dbaas port-forward $POD_NAME 8080:$CONTAINER_PORT
```
Open the web console. Keep the port-forwarding command running. You can now access the consoles in a web browser:
User console: http://localhost:8080/
Ops console: http://localhost:8080/ops/
Default credentials:
Username: admin
Password: admin
Access for production: ingress or reverse proxy
For a production environment, set up a Kubernetes Ingress or an HTTP reverse proxy in front of the service console. For security reasons, configure it in the following way:
Enable HTTPS: Expose the service via the HTTPS protocol and redirect all HTTP connections to HTTPS.
Restrict console access: Only expose the user console (`/`) to customers. The ops console (`/ops/`) should only be exposed to the internal network.
Redirect for customers: For customers, configure a redirect from the ops console path to the user console path.
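As one concrete way to implement these rules, here is a hedged nginx reverse-proxy sketch. The hostname, certificate paths, and upstream address are all assumptions; replace them with values from your own environment, and note that an internal-only vhost (not shown) would be needed to serve `/ops/` to operators:

```nginx
# Customer-facing reverse proxy for the user console (sketch only).
server {
    listen 80;
    server_name console.example.com;          # assumed hostname
    return 301 https://$host$request_uri;     # redirect all HTTP to HTTPS
}

server {
    listen 443 ssl;
    server_name console.example.com;
    ssl_certificate     /etc/nginx/tls/console.crt;   # assumed cert paths
    ssl_certificate_key /etc/nginx/tls/console.key;

    # Customers who hit the ops path are redirected to the user console.
    location /ops/ {
        return 302 /;
    }

    # User console, proxied to the service console (address is an assumption).
    location / {
        proxy_pass http://dbaas-console.internal:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```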
The SynxDB Cloud service console is now installed and accessible.