Data Sync Documentation: Complete Guide

Everything you need to know about implementing, configuring, and optimizing your data synchronization infrastructure.

Quick Start

Getting Started

Get up and running with Data Sync in minutes. Follow this guide to set up your first data synchronization pipeline.

Step 1: System Requirements

Ensure your environment meets the following requirements:

  • Operating System: Linux (RHEL 7+, Ubuntu 18.04+) or Windows Server 2016+
  • Memory: Minimum 8GB RAM (16GB recommended)
  • Storage: 50GB available disk space
  • Network: Stable internet connection with access to source/target databases
Step 2: Download & Install

Download the installer from your customer portal:

Linux installation:

wget https://downloads.datasourcesolutions.ai/datasync-latest.tar.gz
tar -xzf datasync-latest.tar.gz
cd datasync && ./install.sh

Windows installation:

1. Download DataSync-Setup.exe from the customer portal.
2. Run the installer as Administrator.
3. Follow the installation wizard.
Step 3: Configure Connections

Set up your source and target database connections:

  • Access the web console at http://localhost:8080
  • Navigate to Connections → Add New
  • Enter database credentials and test connection
  • Repeat for all source and target databases
Step 4: Create Your First Sync Job

Set up a data synchronization pipeline:

  1. Go to Jobs → Create New Job
  2. Select source and target connections
  3. Choose the tables/schemas to synchronize
  4. Configure sync options (real-time, batch, or scheduled)
  5. Click Start Sync and monitor progress
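The same pipeline can also be set up programmatically. A minimal Python sketch, using the endpoint and field names from this guide's API reference (POST /api/v1/jobs); the connection IDs are placeholders:

```python
import json

def build_sync_job(name, source_id, target_id, tables, sync_mode="realtime"):
    """Build the JSON body for POST /api/v1/jobs (fields from the API reference)."""
    if sync_mode not in ("realtime", "batch", "scheduled"):
        raise ValueError(f"unknown sync_mode: {sync_mode}")
    return json.dumps({
        "name": name,
        "source_connection_id": source_id,
        "target_connection_id": target_id,
        "tables": tables,
        "sync_mode": sync_mode,
    })

body = build_sync_job("Orders Sync", "conn_123", "conn_456", ["orders", "order_items"])
# POST this body to /api/v1/jobs with your Authorization header to start the job.
```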

Need Help Getting Started?

Our support team is ready to help you get up and running. Schedule a personalized onboarding session.

Setup Guide

Installation Guide

Detailed installation instructions for different environments and deployment scenarios.

On-Premises

Install on your own infrastructure for maximum control and security.

  • Full data control
  • Custom configurations
  • No internet required

Cloud Deployment

Deploy on AWS, Azure, or GCP for scalability and flexibility.

  • Auto-scaling
  • High availability
  • Managed backups

Docker Container

Quick deployment using containerization for any environment.

  • Fast setup
  • Consistent environment
  • Easy updates

On-Premises Installation

Prerequisites

  • Java Runtime Environment (JRE) 11 or higher
  • Database client drivers for your source/target databases
  • Network access to all source and target databases

Installation Steps

# Download the installer
wget https://downloads.datasourcesolutions.ai/datasync-latest.tar.gz
tar -xzf datasync-latest.tar.gz
# Run the installation script
cd datasync
sudo ./install.sh
# Configure the service
sudo systemctl enable datasync
sudo systemctl start datasync
# Verify installation
sudo systemctl status datasync

Cloud Deployment

Amazon AWS

Deploy using EC2, RDS, and S3 for a complete solution.


Microsoft Azure

Use Azure VMs, SQL Database, and Blob Storage.


Google Cloud

Deploy with Compute Engine and Cloud SQL.


Docker Container Installation

Quick Start with Docker

# Pull the latest image
docker pull datasourcesolutions/datasync:latest
# Run the container
docker run -d \
  --name datasync \
  -p 8080:8080 \
  -v /data/datasync:/data \
  datasourcesolutions/datasync:latest
# Check container status
docker ps | grep datasync
Pro Tip

Use Docker Compose for production deployments with multiple containers, load balancing, and easy configuration management.
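As a starting point, a minimal Compose file might look like the sketch below. The image name, port, and volume path come from the docker run command above; the restart policy is an illustrative choice, and production setups will add further services and configuration.

```yaml
# docker-compose.yml — minimal sketch; adjust volumes and ports for your environment
services:
  datasync:
    image: datasourcesolutions/datasync:latest
    ports:
      - "8080:8080"
    volumes:
      - /data/datasync:/data
    restart: unless-stopped
```

Run it with `docker compose up -d` from the directory containing the file.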

Developer Resources

API Reference

Complete API documentation for programmatic control of Data Sync operations.

RESTful API

Standard HTTP methods for all operations

API Authentication

Secure token-based authentication

JSON Format

All requests and responses in JSON

Authentication

All API requests require authentication using an API key. Include your key in the Authorization header:

POST /api/v1/authenticate
Headers:
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

Note: API keys can be generated in the web console under Settings → API Keys. Keep your keys secure and never share them publicly.
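In code, the header is simply the string `Bearer <key>`; a small helper keeps it consistent across calls. The endpoint path comes from this reference, while the helper name and use of Python's standard library HTTP client are our own illustration:

```python
import urllib.request

API_BASE = "http://localhost:8080/api/v1"  # web console address from the quick start

def auth_headers(api_key: str) -> dict:
    """Headers required on every Data Sync API request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# An authenticated request object for listing connections; it is built here
# but not sent — pass it to urllib.request.urlopen() to execute it.
req = urllib.request.Request(f"{API_BASE}/connections",
                             headers=auth_headers("YOUR_API_KEY"))
```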

GET /api/v1/connections

List All Connections

Retrieve a list of all configured database connections.

Response Example:
{
  "connections": [
    {
      "id": "conn_123",
      "name": "Production Oracle DB",
      "type": "oracle",
      "host": "prod-db.example.com",
      "port": 1521,
      "status": "active"
    }
  ]
}
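The response can be consumed like any JSON document. For example, filtering the sample above for active connections:

```python
import json

# Sample response body from GET /api/v1/connections (as shown in this reference).
response_body = '''
{ "connections": [ { "id": "conn_123", "name": "Production Oracle DB",
  "type": "oracle", "host": "prod-db.example.com", "port": 1521,
  "status": "active" } ] }
'''

data = json.loads(response_body)
active = [c["id"] for c in data["connections"] if c["status"] == "active"]
# active == ["conn_123"]
```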
POST /api/v1/jobs

Create Sync Job

Create a new data synchronization job between source and target.

Request Body:
{
  "name": "Orders Sync",
  "source_connection_id": "conn_123",
  "target_connection_id": "conn_456",
  "tables": ["orders", "order_items"],
  "sync_mode": "realtime",
  "compression": "high"
}
Response:
{
  "job_id": "job_789",
  "status": "created",
  "message": "Job created successfully"
}
GET /api/v1/jobs/{job_id}/status

Get Job Status

Monitor the status and progress of a synchronization job.

Response Example:
{
  "job_id": "job_789",
  "status": "running",
  "progress": 67.5,
  "rows_synced": 1350000,
  "compression_ratio": 8.2,
  "start_time": "2026-02-02T10:30:00Z",
  "estimated_completion": "2026-02-02T11:45:00Z"
}
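A monitoring loop can derive useful figures from this response. Using the sample values, 1,350,000 rows at 67.5% progress implies roughly 2,000,000 rows in total:

```python
import json

# Sample status response from GET /api/v1/jobs/{job_id}/status (as shown above).
status_body = '''
{ "job_id": "job_789", "status": "running", "progress": 67.5,
  "rows_synced": 1350000, "compression_ratio": 8.2,
  "start_time": "2026-02-02T10:30:00Z",
  "estimated_completion": "2026-02-02T11:45:00Z" }
'''

status = json.loads(status_body)
# Estimate total and remaining rows from the reported progress percentage.
total_rows = round(status["rows_synced"] / (status["progress"] / 100))
remaining = total_rows - status["rows_synced"]
# total_rows == 2000000, remaining == 650000
```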
PUT /api/v1/jobs/{job_id}/stop

Stop Job

Gracefully stop a running synchronization job.

Response:
{
  "job_id": "job_789",
  "status": "stopped",
  "message": "Job stopped successfully"
}

Official SDKs

Download our official client libraries to integrate Data Sync into your applications.

Support

Troubleshooting Guide

Solutions to common issues and best practices for maintaining optimal performance.

Common Issues & Solutions

Best Practices

Performance Optimization

  • Use incremental sync mode when possible
  • Schedule large syncs during off-peak hours
  • Monitor compression ratios and adjust compression levels as needed
  • Clean up old log files regularly

Security & Maintenance

  • Rotate API keys every 90 days
  • Enable SSL/TLS for all connections
  • Keep the software updated to the latest version
  • Back up configuration files regularly

Still Need Help?

Our technical support team is available 24/7 to help resolve any issues you encounter.