
Google Sheets to Snowflake: 2 Easy Methods
Is your data in Google Sheets becoming too large for on-demand analytics? Are you struggling to combine data from multiple Google Sheets into a single source of truth for reports and analytics? If that’s the case, then your business may be ready for a move to a mature data platform like Snowflake. This post covers the approaches you can use to migrate your data from Google Sheets to Snowflake. Snowflake Google Sheets integration facilitates data accessibility and collaboration by allowing information to be transferred and analyzed across the two platforms with ease. The following are the methods you can use to connect Google Sheets to Snowflake in a seamless fashion:
Method 1: Using LIKE.TG Data to Connect Google Sheets to Snowflake
LIKE.TG is the only real-time ELT No-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ data sources (40+ free sources), we help you not only export data from sources and load it to destinations but also transform and enrich your data to make it analysis-ready.
Sign up here for a 14-Day Free Trial!
LIKE.TG provides an easy-to-use data integration platform that works by building an automated pipeline in just two interactive steps:
Step 1: Configure Google Sheets as a source, by entering the Pipeline Name and the spreadsheet you wish to replicate.
Perform the following steps to configure Google Sheets as a Source in your Pipeline:
Click PIPELINES in the Navigation Bar.
Click + CREATE in the Pipelines List View.
In the Select Source Type page, select Google Sheets.
In the Configure your Google Sheets account page, to select the authentication method for connecting to Google Sheets, do one of the following:
To connect with a User Account, do one of the following:
Select a previously configured account and click CONTINUE.
Click + ADD GOOGLE SHEETS ACCOUNT and perform the following steps to configure an account:
Select the Google account associated with your Google Sheets data.
Click Allow to authorize LIKE.TG to access the data.
To connect with a Service Account, do one of the following:
Select a previously configured account and click CONTINUE.
Click the attach icon to upload the Service Account Key and click CONFIGURE GOOGLE SHEETS ACCOUNT. Note: LIKE.TG supports only the JSON format for the key file.
In the Configure your Google Sheets Source page, specify the Pipeline Name, Sheets, and Custom Header Row.
Click TEST & CONTINUE.
Proceed to configuring the data ingestion and setting up the Destination.
Step 2: Create and Configure your Snowflake Warehouse
LIKE.TG provides you with a ready-to-use script to configure the Snowflake warehouse you intend to use as the Destination.
Follow these steps to run the script:
Log in to your Snowflake account.
In the top right corner of the Worksheets tab, click the + icon to create a new worksheet.
Paste the script in the worksheet. The script creates a new role for LIKE.TG in your Snowflake Destination. Keeping your privacy in mind, the script grants only the bare minimum permissions required by LIKE.TG to load the data in your Destination.
Replace the sample values provided in lines 2-7 of the script with your own to create your warehouse. These are the credentials that you will be using to connect your warehouse to LIKE.TG . You can specify a new warehouse, role, and/or database name to create these now, or use pre-existing ones to load data into.
Press CMD + A (Mac) or CTRL + A (Windows) inside the worksheet area to select the script.
Press CMD + Return (Mac) or CTRL + Enter (Windows) to run the script.
Once the script runs successfully, you can use the credentials from lines 2-7 of the script to connect your Snowflake warehouse to LIKE.TG .
Step 3: Complete Google Sheets to Snowflake migration by providing your destination name, account name, region of your account, database username and password, database and schema name, and the Data Warehouse name.
And LIKE.TG automatically takes care of the rest. It’s just that simple. You are now ready to start migrating data from Google Sheets to Snowflake in a hassle-free manner! You can also integrate data from numerous other free data sources like Google Sheets, Zendesk, etc. to the desired destination of your choice, such as Snowflake, in a jiffy.
LIKE.TG is also much faster, thanks to its highly optimized features and architecture. Some of the additional features you can also enjoy with LIKE.TG are:
Transformations– LIKE.TG provides preload transformations through Python code. It also allows you to run transformation code for each event in the pipelines you set up. You need to edit the event object’s properties received in the transform method as a parameter to carry out the transformation. LIKE.TG also offers drag-and-drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few. These can be configured and tested before putting them to use.
Monitoring and Data Management – LIKE.TG automatically manages your data loads and ensures you always have up-to-date and accurate data in Snowflake.
Automatic Change Data Capture – LIKE.TG performs incremental data loads automatically through a number of in-built Change Data Capture mechanisms. This means that as and when data in Google Sheets changes, it is loaded into Snowflake in real time.
It just took us 2 weeks to completely transform from spreadsheets to a modern data stack. Thanks to LIKE.TG that helped us make this transition so smooth and quick. Now all the stakeholders of our management, sales, and marketing team can easily build and access their reports in just a few clicks.
– Matthew Larner, Managing Director, ClickSend
Method 2: Using Migration Scripts to Connect Google Sheets to Snowflake
To migrate your data from Google Sheets to Snowflake, you may opt for a custom-built data migration script to get the job done. We will demonstrate this process in the next paragraphs, along with the requirements you need to have in place before proceeding.
Step 1: Setting Up Google Sheets API Access for Google Sheets
As a first step, you would need to set up Google Sheets API access for the affected Google Sheets. Start by doing the following:
1. Log in to the Google account that owns the Google Sheets
2. Point your browser to the Google Developer Console (copy and paste the following in your browser: console.developers.google.com)
3. After the console loads, create a project by clicking the “Projects” dropdown and then clicking “New Project“
4. Give your project a name and click “Create“
5. After that, click “Enable APIs and Services“
6. Search for “Google Sheets API” in the search bar that appears and select it
7. Click “Enable” to enable the Google Sheets API
8. Click on the “Credentials” option on the left navbar in the view that appears, then click “Create Credentials“, and finally select “Service Account“
9. Provide a name for your service account. You will notice it generates an email format for the Service Account ID. In my example in the screenshot below, it is “[email protected]”. Take note of this value. The token “migrate-268012” is the name of the project I created while “gsheets-migration” is the name of my service account. In your case, these would be your own supplied values.
10. Click “Create” and fill out the remaining optional fields. Then click “Continue“
11. In the view that appears, click “Create Key“, select the “JSON” option and click “Create” to download your key file (credentials). Please store it in a safe place. We will use this later when setting up our migration environment.
12. Finally, click “Done“.
At this point, all that remains for the Google Sheets setup is the sharing of all the Google Sheets you wish to migrate with the email-format Service Account ID mentioned in step 9 above.
Note: You can copy your Service Account ID from the “client-email” field of the credential file you downloaded.
For this demonstration, I will be migrating a sheet called “data-uplink-logs” shown in the screenshot below. I will now share it with my Service Account ID: click “Share” on the Google Sheet, paste in your Service Account ID, and click “Send“. Repeat this process for all sheets you want to migrate. Ignore any “mail delivery subsystem failure” notifications you receive while sharing the sheets, as your Service Account ID is not designed to operate as a normal email address.
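Once a sheet has been shared with the Service Account ID, the service account can read it programmatically. The following is a minimal sketch, assuming the key file from step 11 is saved as googlesheets.json and the sheet name matches the demo sheet above; the gspread and oauth2client libraries used here are the ones installed later in the migration environment:

import gspread
from oauth2client.service_account import ServiceAccountCredentials

# Scopes typically required for reading spreadsheets via the Sheets/Drive APIs.
SCOPES = [
    "https://spreadsheets.google.com/feeds",
    "https://www.googleapis.com/auth/drive",
]

# "googlesheets.json" is the service-account key file downloaded earlier (assumed filename).
creds = ServiceAccountCredentials.from_json_keyfile_name("googlesheets.json", SCOPES)
client = gspread.authorize(creds)

# Open the sheet that was shared with the service-account email and pull its rows.
worksheet = client.open("data-uplink-logs").sheet1
rows = worksheet.get_all_values()  # list of lists, one entry per spreadsheet row
print(f"Fetched {len(rows)} rows from the shared sheet")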
Step 2: Configuring Target Database in Snowflake
We’re now ready to get started on the Snowflake side of the configuration process, which is simpler.
To begin, create a Snowflake account. Creating an account furnishes you with all the credentials you will need to access Snowflake from your migration script.
Specifically:
After creating your account, you will be redirected to your Cloud Console which will open up in your browser
During the account creation process, you would have specified your chosen username and password. You would have also selected your preferred AWS region, which will be part of your account.
Your Snowflake account is of the form <Your Account ID>.<AWS Region> and your Snowflake cloud console URL will be of the form https://<Your Account ID>.<AWS Region>.snowflakecomputing.com/
Prepare and store a JSON file with these credentials. It will have the following layout: { "user": "<Your Username>", "account": "<Your Account ID>.<AWS Region>", "password": "<Your Password>" }
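Once this file is saved, here is a minimal sketch of how a Python script could consume it with the snowflake-connector-python package (the file name snowflake.json and the test query are assumptions):

import json
import snowflake.connector

# Load the credentials file described above.
with open("snowflake.json") as f:
    creds = json.load(f)

# Open a connection; the account string already includes the region, e.g. "abc12345.us-west-2".
conn = snowflake.connector.connect(
    user=creds["user"],
    account=creds["account"],
    password=creds["password"],
)

# Quick sanity check that the connection works.
cur = conn.cursor()
cur.execute("SELECT CURRENT_VERSION()")
print(cur.fetchone())
cur.close()
conn.close()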
After storing the JSON file, take some time to create your target environment on Snowflake using the intuitive User Interface.
You are initially assigned a Data Warehouse called COMPUTE_WH so you can go ahead and create a Database and tables in it.
After providing a valid name for your database and clicking “Finish“, click the “Grant Privileges” button which will show the form in the screenshot below.
Select the “Modify” privilege and assign it to your schema name (which is “PUBLIC” by default). Click “Grant“. If necessary, click “Cancel” afterward to return to the main view.
The next step is to add a table to your newly created database. You do this by clicking the database name on the left display and then clicking on the “Create Table” button. This will pop up the form below for you to design your table:
After designing your table, click “Finish” and then click on your table name to verify that your table was created as desired:
Finally, open up a Worksheet pane, which will allow you to run queries on your table. Do this by clicking on the “Worksheets” icon, and then clicking on the “+” tab.
You can now select your database from the left pane to start running queries.
We will run queries from this view to verify that our data migration process is correctly writing our data from the Google sheet to this table.
We are now ready to move on to the next step.
Step 3: Preparing a Migration Environment on Linux Server
In this step, we will configure a migration environment on our Linux server.
SSH into your Linux instance. I am using a remote AWS EC2 instance running Ubuntu, so my SSH command is of the form ssh -i <keyfile>.pem ubuntu@<server_public_IP>
Once in your instance, run sudo apt-get update to update the environment.
Next, create a folder for the migration project and enter it: sudo mkdir migration-test; cd migration-test
It’s now time to clone the migration script we created for this post: sudo git clone https://github.com/cmdimkpa/Google-Sheets-to-Snowflake-Data-Migration.git
Enter the project directory and view contents with the command:
cd Google-Sheets-to-Snowflake-Data-Migration; ls
This reveals the following files:
googlesheets.json: copy your saved Google Sheets API credentials into this file.
snowflake.json: likewise, copy your saved Snowflake credentials into this file.
migrate.py: this is the migration script.
Using the Migration Script
Before using the migration script (a Python script), we must ensure the required libraries for both Google Sheets and Snowflake are available in the migration environment. Python itself should already be installed – this is usually the case for Linux servers, but check and ensure it is installed before proceeding.
To install the required packages, run the following commands:
sudo apt-get install -y libssl-dev libffi-dev
pip install --upgrade snowflake-connector-python
pip install gspread oauth2client PyOpenSSL
At this point, we are ready to run the migration script.
The required command is of the form:
sudo python migrate.py <Source Google Sheet Name>
<Comma-separated list of columns in the Google Sheet to Copy>
<Number of rows to copy each run> <Snowflake target Data Warehouse>
<Snowflake target Database> <Snowflake target Table> <Snowflake target table Schema>
<Comma-separated list of Snowflake target table fields> <Snowflake account role>
For our example process, the command becomes:
sudo python migrate.py data-uplink-logs A,B,C,D 24
COMPUTE_WH TEST_GSHEETS_MIGRATION GSHEETS_MIGRATION PUBLIC CLIENT_ID,NETWORK_TYPE,BYTES,UNIX_TIMESTAMP SYSADMIN
To migrate 24 rows of incremental data (each run) from our test Google Sheet data-uplink-logs to our target Snowflake environment, we simply run the command above.
The reason we migrate only 24 rows at a time is to beat the rate limit for the free tier of the Google Sheets API. Depending on your plan, you may not have this restriction.
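The actual script lives in the cloned repository; purely to illustrate the kind of logic behind the command-line arguments above, here is a simplified, hypothetical sketch that reads the same positional arguments, pulls one batch of rows from the sheet, and appends them to the Snowflake table (header handling, column selection by letter, and error handling are omitted, and the real migrate.py may differ):

import json
import sys

import gspread
import snowflake.connector
from oauth2client.service_account import ServiceAccountCredentials

# Positional arguments in the order used by the example command above.
(sheet_name, sheet_columns, batch_size, warehouse,
 database, table, schema, fields, role) = sys.argv[1:10]
batch_size = int(batch_size)
field_list = fields.split(",")

# Authorize against Google Sheets with the same key file as before.
scopes = ["https://spreadsheets.google.com/feeds",
          "https://www.googleapis.com/auth/drive"]
creds = ServiceAccountCredentials.from_json_keyfile_name("googlesheets.json", scopes)
rows = gspread.authorize(creds).open(sheet_name).sheet1.get_all_values()[1:]  # drop header

# Connect to Snowflake with the credentials file from Step 2.
sf = json.load(open("snowflake.json"))
conn = snowflake.connector.connect(user=sf["user"], account=sf["account"],
                                   password=sf["password"])
cur = conn.cursor()
for stmt in (f"USE ROLE {role}", f"USE WAREHOUSE {warehouse}",
             f"USE DATABASE {database}", f"USE SCHEMA {schema}"):
    cur.execute(stmt)

# Incremental load: skip rows already present, then copy one batch of batch_size rows.
cur.execute(f"SELECT COUNT(*) FROM {table}")
already_loaded = cur.fetchone()[0]
batch = rows[already_loaded: already_loaded + batch_size]

placeholders = ", ".join(["%s"] * len(field_list))
cur.executemany(f"INSERT INTO {table} ({fields}) VALUES ({placeholders})", batch)
conn.commit()
cur.close()
conn.close()
print(f"Inserted {len(batch)} rows into {database}.{schema}.{table}")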
Step 4: Testing the Migration Process
To test that the migration ran successfully, we simply go to our Snowflake Worksheet which we opened earlier, and run the following SQL query:
SELECT * FROM TEST_GSHEETS_MIGRATION.PUBLIC.GSHEETS_MIGRATION
Indeed, the data is there. So the data migration effort was successful.
Step 5: Run CRON Jobs
As a final step, run cron jobs as required to have the migrations occur on a schedule. We cannot cover the creation of cron jobs here, as it is beyond the scope of this post.
This concludes the migration script approach! I hope you were as excited reading it as I was writing it. It’s been an interesting journey; now let’s review the drawbacks of this approach.
Limitations of using Migration Scripts to Connect Google Sheets to Snowflake
The migration script approach to connect Google Sheets to Snowflake works well, but has the following drawbacks:
This approach requires pulling a few engineers away to set up and test the infrastructure. Once built, you would also need a dedicated engineering team that can constantly monitor the setup and provide immediate support if and when something breaks.
Aside from the setup process which can be intricate depending on experience, this approach creates new requirements such as:
The need to monitor the logs and ensure the uptime of the migration processes.
Fine-tuning of the cron jobs to ensure optimal data transmission with respect to the data inflow rates of the different Google sheets, any Google Sheet API rate limits, and the latency requirements of the reporting or analytics processes running on Snowflake or elsewhere.
Download the Cheatsheet on How to Set Up ETL to Snowflake
Learn the best practices and considerations for setting up high-performance ETL to Snowflake
Method 3: Connect Google Sheets to Snowflake Using Python
In this method, you will use Python to load data from Google Sheets to Snowflake. To do this, you will have to enable public access to your Google Sheets. You can do this by going to File >> Share >> Publish to web.
After publishing to web, you will see a link in the format of
https://docs.google.com/spreadsheets/d/{your_google_sheets_id}/edit#gid=0
You need to install three libraries to read this data, transform it into a dataframe, and write it to Snowflake: pandas, snowflake-connector-python, and pyarrow.
You can install pandas with pip install pandas, the Snowflake connector with pip install snowflake-connector-python, and pyarrow with pip install pyarrow.
You may use the following code to read the data from your Google Sheets.
import pandas as pd
data=pd.read_csv(f'https://docs.google.com/spreadsheets/d/{your_google_sheets_id}/pub?output=csv')
In the code above, you will replace {your_google_sheets_id} with the id from your spreadsheet. You can preview the data by running the command data.head()
You can also check out the number of columns and records by running data.shape
Setting up Snowflake login credentials
You will need to set up a data warehouse, database, schema, and table on your Snowflake account.
Data loading in Snowflake
You need to use the Snowflake connector you installed earlier to import the data into Snowflake from Python.
When you run write_to_snowflake(data), you will ingest all the data into your Snowflake data warehouse.
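The write_to_snowflake function itself isn’t shown in the article; here is a minimal hypothetical version built on the connector’s write_pandas helper (which is why pyarrow is needed), with placeholder credentials and an assumed, already-created target table:

import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def write_to_snowflake(df):
    # Placeholder credentials and object names -- replace with your own.
    conn = snowflake.connector.connect(
        user="<your_username>",
        password="<your_password>",
        account="<your_account_id>.<region>",
        warehouse="<your_warehouse>",
        database="<your_database>",
        schema="PUBLIC",
    )
    # write_pandas bulk-loads the dataframe into an existing table whose
    # columns match the dataframe; it relies on pyarrow under the hood.
    success, num_chunks, num_rows, _ = write_pandas(conn, df, table_name="<YOUR_TABLE>")
    conn.close()
    return success, num_rows

# write_to_snowflake(data)  # `data` is the dataframe read from the published sheet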
Disadvantages Of Using ETL Scripts
There are a variety of challenges and drawbacks when integrating data from sources like Google Sheets to Snowflake using ETL (Extract, Transform, Load) procedures, especially for businesses with little funding or experience.
Price is the primary factor to be considered. Implementation and upkeep of the ETL technique can be expensive. It demands investments in personnel with the necessary skills to efficiently design, develop, and oversee these processes in addition to technology.
Complexity is an additional problem. ETL processes may be intricate and challenging to configure properly. Companies without the necessary expertise may find it difficult to properly manage data conversions and interfaces.
ETL processes can have limitations on scalability and flexibility. They might not handle unstructured data well or support real-time data streams, which can make them inappropriate for such use cases.
Conclusion
This blog talks about the different methods you can use to set up Google Sheets to Snowflake integration in a seamless fashion: using migration scripts, using Python, and with the help of a third-party tool, LIKE.TG .
Visit our Website to Explore LIKE.TG
Extracting complex data from a diverse set of data sources can be a challenging task and this is where LIKE.TG saves the day! LIKE.TG offers a faster way to move data from Databases or SaaS applications such as MongoDB into your Data Warehouse like Snowflake to be visualized in a BI tool. LIKE.TG is fully automated and hence does not require you to code.
As we have seen, LIKE.TG greatly simplifies the process of migrating data from your Google Sheets to Snowflake or indeed any other source and destination. Sign Up for your 14-day free trial and experience stress-free data migration today! You can also have a look at the unbeatable LIKE.TG Pricing that will help you choose the right plan for your business needs.

Apache Kafka to BigQuery: 3 Easy Methods
Various organizations rely on the open-source streaming platform Kafka to build real-time data applications and pipelines. These organizations are also looking to modernize their IT landscape and adopt BigQuery to meet their growing analytics needs. By establishing a connection from Kafka to BigQuery, these organizations can quickly activate and analyze data-derived insights as they happen, as opposed to waiting for a batch process to be completed.
Methods to Set up Kafka to BigQuery Connection
You can easily set up your Kafka to BigQuery connection using the following methods.
Method 1: Using LIKE.TG Data to Move Data from Kafka to BigQuery
LIKE.TG is the only real-time ELT No-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ data sources (40+ free sources), we help you not only export data from sources and load it to destinations but also transform and enrich your data to make it analysis-ready, with zero data loss.
Sign up here for a 14-day free trial
LIKE.TG takes care of all your data preprocessing needs required to set up Kafka to BigQuery Integration and lets you focus on key business activities. LIKE.TG provides a one-stop solution for all Kafka use cases and collects the data stored in your Topics & Clusters. Moreover, since Google BigQuery has built-in support for nested and repeated columns, LIKE.TG neither splits nor compresses the JSON data.
Here are the steps to move data from Kafka to BigQuery using LIKE.TG :
Authenticate Kafka Source: Configure Kafka as the source for your LIKE.TG Pipeline by specifying Broker and Topic Names.
Check out our documentation to know more about the connector
Configure BigQuery Destination: Configure the Google BigQuery Data Warehouse account, where the data needs to be streamed, as your destination for the LIKE.TG Pipeline.
Read more on our BigQuery connector here.
With continuous Real-Time data movement, LIKE.TG allows you to combine Kafka data along with your other data sources and seamlessly load it to BigQuery with a no-code, easy-to-set-up interface. LIKE.TG Data also offers live support and easy transformations, and has been built to keep up with your needs as your operation scales up. Try our 14-day full-feature access free trial!
Key features of LIKE.TG are:
Data Transformation:It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Schema Management:LIKE.TG can automatically detect the schema of the incoming data and maps it to the destination schema.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Get Started with LIKE.TG for Free
Method 2: Using Custom Code to Move Data from Kafka to BigQuery
The steps to build a custom-coded data pipeline between Apache Kafka and BigQuery are divided into 2, namely:
Step 1: Streaming Data from Kafka
Step 2: Ingesting Data into BigQuery
Step 1: Streaming Data from Kafka
There are various methods and open-source tools which can be employed to stream data from Kafka. This blog covers the following methods:
Streaming with Kafka Connect
Streaming with Apache Beam
Streaming with Kafka Connect
Kafka Connect is an open-source component of Kafka. It is designed by Confluent to connect Kafka with external systems such as databases, key-value stores, file systems et al.
It allows users to stream data from Kafka straight into BigQuery with sub-minute latency through its underlying framework. Kafka Connect lets users take advantage of existing connector implementations, so you don’t need to build new connections when moving new data. Kafka Connect provides a ‘SINK’ connector that continuously consumes data from Kafka topics and streams it to an external storage location in seconds. It also has a ‘SOURCE’ connector that ingests databases as a whole and streams table updates to Kafka topics.
There is no built-in connector for Google BigQuery in Kafka Connect. Hence, you will need to use a third-party connector such as the one from WePay. With this connector, Google BigQuery tables can be auto-generated from the AVRO schema seamlessly. The connector also helps in dealing with schema updates. As Google BigQuery streaming is backward compatible, it lets users easily add new fields with default values, and streaming will continue uninterrupted.
Using Kafka Connect, the data can be streamed and ingested into Google BigQuery in real-time. This, in turn, gives users the advantage to carry out analytics on the fly.
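As a rough illustration of what wiring this up looks like, a sink connector is usually registered through Kafka Connect’s REST API. The sketch below assumes a local Connect worker and uses configuration keys in the style of the WePay/Confluent BigQuery sink connector; the exact key names depend on the connector version, so treat them as placeholders:

import requests

# Hypothetical Connect worker URL; topic, project, and dataset are placeholders.
connect_url = "http://localhost:8083/connectors"

connector = {
    "name": "kafka-to-bigquery-sink",
    "config": {
        "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
        "topics": "orders",                         # Kafka topic(s) to stream
        "project": "my-gcp-project",                # target BigQuery project
        "defaultDataset": "kafka_ingest",           # target BigQuery dataset
        "keyfile": "/etc/kafka/bigquery-key.json",  # GCP service-account key
        "autoCreateTables": "true",                 # derive tables from the schema
    },
}

# POSTing this JSON to the Connect REST API creates and starts the connector.
resp = requests.post(connect_url, json=connector)
print(resp.status_code, resp.json())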
Limitations of Streaming with Kafka Connect
In this method, data is partitioned only by the processing time.
Streaming Data with Apache Beam
Apache Beam is an open-source unified programming model that implements batch and stream data processing jobs that run on a single engine. The Apache Beam model helps abstract away the complexity of parallel data processing, allowing you to focus on what your job needs to do, not on how the job gets executed.
One of the major downsides of streaming with Kafka Connect is that it can only ingest data by the processing time which can lead to data arriving in the wrong partition. Apache Beam resolves this issue as it supports both batch and stream data processing.
Apache Beam has a supported distributed processing backend called Cloud Dataflow, which executes your code as a cloud job, making it fully managed and auto-scaled. The number of workers is fully elastic, changing according to your current workload, and the cost of execution is adjusted accordingly.
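To make the idea concrete, here is a schematic Beam pipeline in Python that reads from Kafka and writes to BigQuery; the bootstrap server, topic, table, and JSON parsing are illustrative assumptions, and ReadFromKafka relies on Beam’s cross-language Kafka IO (which needs a Java runtime available):

import json

import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions

def to_row(kafka_record):
    # Each Kafka record arrives as a (key, value) pair of bytes; assume JSON values.
    _, value = kafka_record
    return json.loads(value.decode("utf-8"))

# Run locally with the DirectRunner, or pass --runner=DataflowRunner plus GCP
# options to execute this as a fully managed, auto-scaled Cloud Dataflow job.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromKafka" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "localhost:9092"},
            topics=["uplink-logs"])
        | "ParseJSON" >> beam.Map(to_row)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-gcp-project:kafka_ingest.uplink_logs",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    )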
Limitations of Streaming Data with Apache Beam
Apache Beam incurs an extra cost for running managed workers.
Apache Beam is not a part of the Kafka ecosystem.
LIKE.TG supports both Batch Load & Streaming Load for the Kafka to BigQuery use case and provides a no-code, fully managed & minimal-maintenance solution for this use case.
Step 2: Ingesting Data to BigQuery
Before you start streaming in from Kafka to BigQuery, you need to check the following boxes:
Make sure you have Write access to the dataset that contains your destination table to prevent subsequent errors when streaming.
Check the quota policy for streaming data on BigQuery to ensure you are not in violation of any of the policies.
Ensure that billing is enabled for your GCP (Google Cloud Platform) account. This is because streaming is not available for the free tier of GCP, hence if you want to stream data into Google BigQuery you have to make use of the paid tier.
Now, let us discuss the methods to ingest our streamed data from Kafka to BigQuery. The following approaches are covered in this post:
Streaming with BigQuery API
Batch Loading into Google Cloud Storage (GCS)
Streaming with BigQuery API
The Google BigQuery API is a data platform for users to manage, create, share, and query data. It supports streaming data directly into Google BigQuery with a quota of up to 100K rows per project.
Real-time data streaming on Google BigQuery API costs $0.05 per GB. To make use of Google BigQuery API, it has to be enabled on your account. To enable the API:
Ensure that you have a project created.
In the GCP Console, click on the hamburger menu and select APIs and services and click on the dashboard.
In the API and services window, select enable API and Services.
A search bar will pop up. Enter Google BigQuery. Two search results, Google BigQuery Data Transfer and Google BigQuery API, will appear. Select both of them and enable them.
With Google BigQuery API enabled, the next step is to move the data from Apache Kafka through a stream-processing framework like Kafka Streams into Google BigQuery. Kafka Streams is an open-source library for building scalable streaming applications on top of Apache Kafka, and it allows users to execute their code as a regular Java application. The pipeline flows from an ingested Kafka topic, through any rows filtered in the streams application, and on into BigQuery. It supports both processing-time and event-time partitioning models.
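Kafka Streams itself is a Java library, so the snippet below is only a rough Python analogue of the same flow: it consumes a topic with kafka-python and streams small batches into BigQuery through insert_rows_json from the google-cloud-bigquery client. The topic, table ID, and batch size are assumptions, and the destination table must already exist:

import json

from google.cloud import bigquery
from kafka import KafkaConsumer

# Consume JSON messages from an assumed topic.
consumer = KafkaConsumer(
    "uplink-logs",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

client = bigquery.Client()  # picks up GOOGLE_APPLICATION_CREDENTIALS
table_id = "my-gcp-project.kafka_ingest.uplink_logs"

buffer = []
for message in consumer:
    buffer.append(message.value)        # already a dict after deserialization
    if len(buffer) >= 500:              # stream in small batches
        errors = client.insert_rows_json(table_id, buffer)
        if errors:
            print("BigQuery rejected some rows:", errors)
        buffer = []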
Limitations of Streaming with BigQuery API
Though streaming with the Google BigQuery API gives complete control over your records, you have to design a robust system to enable it to scale successfully.
You have to handle all streaming errors and downsides independently.
Batch Loading Into Google Cloud Storage (GCS)
To use this technique, you could make use of Secor, a tool designed to deliver data from Apache Kafka into object storage systems such as GCS and Amazon S3. From GCS, we then load the data into Google BigQuery using either a load job, manually via the BigQuery UI, or through the command-line tools in Google Cloud’s Software Development Kit (SDK).
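Besides the BigQuery UI and the bq command-line tool, the same load job can be triggered from Python with the google-cloud-bigquery client. A minimal sketch, with a placeholder bucket path and destination table and assuming Secor has written newline-delimited JSON files to GCS, looks like this:

from google.cloud import bigquery

client = bigquery.Client()

# Placeholder GCS path (written by Secor) and destination table.
uri = "gs://my-secor-bucket/kafka/uplink-logs/*.json"
table_id = "my-gcp-project.kafka_ingest.uplink_logs"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,                                  # infer the schema from the files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the batch load finishes
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")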
Limitations of Batch Loading in GCS
Secor lacks support for the AVRO input format, which forces you to always use a JSON-based input format.
This is a two-step process that can lead to latency issues.
This technique does not stream data in real-time. This becomes a blocker in real-time analysis for your business.
This technique requires a lot of maintenance to keep up with new Kafka topics and fields. To update these changes you would need to put in the effort to manually update the schema in the Google BigQuery table.
Method 3: Using the Kafka to BigQuery Connector to Move Data from Apache Kafka to BigQuery
The Kafka BigQuery connector is handy for streaming data into BigQuery tables. When streaming data from Apache Kafka topics with registered schemas, the sink connector creates BigQuery tables with an appropriate BigQuery table schema, based on the Kafka schema information for the topic.
Here are some limitations associated with the Kafka Connect BigQuery Sink Connector:
No support for schemas with floating fields with NaN or +Infinity values.
No support for schemas with recursion.
If you configure the connector with upsertEnabled or deleteEnabled, it doesn’t support Single Message Transformations modifying the topic name.
Need for Kafka to BigQuery Migration
While you can use the Kafka platform to build real-time data pipelines and applications, you can use BigQuery to modernize your IT landscape, while meeting your growing analytics needs.
Connecting Kafka to BigQuery allows real-time data processing for analyzing and acting on data as it is generated. This enables you to obtain valuable insights and make faster decisions. A common use case is in the finance industry, where real-time data processing makes it possible to identify fraudulent activities.
Yet another need for migrating Kafka to BigQuery is scalability. As both platforms are highly scalable, you can handle large data volumes without any performance issues. Scaling your data processing systems for growing data volumes can be done with ease since Kafka can handle millions of messages per second while BigQuery can handle petabytes of data.
Another reason to connect Kafka to BigQuery is cost-effectiveness. Kafka, being an open-source platform, involves no licensing costs, and the pay-as-you-go pricing model of BigQuery means you only need to pay for the data processed. Integrating both platforms therefore means you only pay for the data that is processed and analyzed, helping reduce overall costs.
Conclusion
This article provided you with a step-by-step guide on how you can set up a Kafka to BigQuery connection using a Custom Script or using LIKE.TG . However, there are certain limitations associated with the Custom Script method. You will need to implement it manually, which will consume your time and resources and is error-prone. Moreover, you need working knowledge of the backend tools to successfully implement the in-house data transfer mechanism.
LIKE.TG Data provides an Automated No-code Data Pipeline that empowers you to overcome the above-mentioned limitations. LIKE.TG caters to 150+ data sources (including 40+ free sources) and can seamlessly transfer your data from Kafka to BigQuery within minutes. LIKE.TG ’s Data Pipeline enriches your data and manages the transfer process in a fully automated and secure manner without having to write any code. It will make your life easier and make data migration hassle-free.
Learn more about LIKE.TG
Want to take LIKE.TG for a spin? Sign up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand.
Share your understanding of the Kafka to BigQuery Connection in the comments below!

Connect Microsoft SQL Server to BigQuery in 2 Easy Methods
Are you looking to perform a detailed analysis of your data without having to disturb the production setup on SQL Server? In that case, moving data from SQL Server to a robust data warehouse like Google BigQuery is the right direction to take. This article aims to guide you with steps to move data from Microsoft SQL Server to BigQuery, shed light on the common challenges, and assist you in navigating through them. You will explore two popular methods that you can utilize to set up Microsoft SQL Server to BigQuery migration.
Methods to Set Up Microsoft SQL Server to BigQuery Integration
Broadly, there are two ways to migrate your data from Microsoft SQL Server to BigQuery.
Method 1: Using LIKE.TG Data to Set Up Microsoft SQL Server to BigQuery Integration
Integrate your data effortlessly from Microsoft SQL Server to BigQuery in just two easy steps using LIKE.TG Data. We take care of your data while you focus on more important things to boost your business.
Method 2: Manual ETL Process to Set Up Microsoft SQL Server to BigQuery Integration
This method involves the use of SQL Server Management Studio (SSMS) for setting up the integration. Moreover, it requires you to convert the data into CSV format and then replicate it, which demands a lot of engineering bandwidth and knowledge of SQL queries.
Get Started with LIKE.TG for Free
Method 1: Using LIKE.TG Data to Set Up Microsoft SQL Server to BigQuery Integration
LIKE.TG is a no-code fully managed data pipeline platform that completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss.
Sign up here for a 14-Day Free Trial!
The steps to load data from Microsoft SQL Server to BigQuery using LIKE.TG Data are as follows:
Connect your Microsoft SQL Server account to LIKE.TG ’s platform. LIKE.TG has an in-built Microsoft SQL Server Integration that connects to your account within minutes.
Click here to read more about using SQL Server as a Source connector with LIKE.TG .
Select Google BigQuery as your destination and start moving your data.
Click here to read more about using BigQuery as a destination connector with LIKE.TG .
With this, you have successfully set up Microsoft SQL Server to BigQuery Integration using LIKE.TG Data.
Here are more reasons to try LIKE.TG :
Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Schema Management: LIKE.TG can automatically detect the schema of the incoming data and maps it to the destination schema.
Incremental Data Load: LIKE.TG allows you to migrate SQL Server to BigQuery data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Method 2: Manual ETL Process to Set Up Microsoft SQL Server to BigQuery Integration
The steps to execute the custom code are as follows:
Step 1: Export the Data from SQL Server using SQL Server Management Studio (SSMS)
Step 2: Upload to Google Cloud Storage
Step 3: Upload to BigQuery from Google Cloud Storage (GCS)
Step 4: Update the Target Table in BigQuery
Step 1: Export the Data from SQL Server using SQL Server Management Studio (SSMS)
SQL Server Management Studio (SSMS) is a free tool built by Microsoft to provide a coordinated environment for managing any SQL infrastructure. SSMS is used to query, design, and manage your databases from your local machine. We are going to use SSMS to extract our data in Comma-Separated Values (CSV) format in the steps below.
Install SSMS if you don’t have it on your local machine. You can install it here.
Open SSMS and connect to a SQL Server instance. In the Object Explorer window, right-click a database, go to the Tasks sub-menu, and choose the Export Data option.
The welcome page of the SQL Server Import and Export Wizard will open. Click Next to proceed to export the required data.
You will see a window to choose a data source. Select your preferred data source.
In the Server name dropdown list, select a SQL Server instance.
In the Authentication section, select the authentication mode for the data source connection. Next, from the Database drop-down box, select the database from which data will be copied. Once you have filled in these fields, select ‘Next‘.
The next window is the Choose a Destination window, where you specify where the data copied from SQL Server will be written. In the Destination drop-down box, select the Flat File Destination item.
In the File name box, establish the CSV file where the data from the SQL database will be exported to and select the next button.
The next window you will see is the Specify Table Copy or Query window. Choose ‘Copy data from one or more tables or views‘ to get all the data from the table.
Next, you will see the Configure Flat File Destination window. Select the source table whose data should be exported to the CSV file you specified earlier.
At this point, your export is configured. To have a sneak peek at the data you are about to export, click Preview.
Continue by clicking ‘Next‘. The Save and Run Package window will pop up; click ‘Next‘ again.
The Complete the Wizard window will appear next; it gives you an overview of all the choices you made during the export process. To complete the export, click ‘Finish‘.
The exported CSV file will be found on your local drive, in the location you specified.
Step 2: Upload to Google Cloud Storage
After completing the export to your local machine, the next step in moving from SQL Server to BigQuery is to transfer the CSV file to Google Cloud Storage (GCS). There are various ways of achieving this, but for the purpose of this blog post, let’s discuss the following methods.
Method 1: Using Gsutil
gsutil is a Python-based GCP tool that gives you access to GCS from the command line. To set up gsutil, follow the official quickstart guide. gsutil provides a straightforward way to upload a file to GCS from your local machine. To create a bucket to copy your file to:
gsutil mb gs://my-new-bucket
The new bucket created is called “my-new-bucket“. Your bucket name must be globally unique. If successful, the command returns:
Creating gs://my-new-bucket/...
To copy your file to GCS:
gsutil cp export.csv gs://my-new-bucket/destination/export.csv
In this command, “export.csv” refers to the file you want to copy. “gs://my-new-bucket” represents the GCS bucket you created earlier. Finally, “destination/export.csv” specifies the destination path and filename in the GCS bucket where the file will be copied to.
Method 2: Using Web Console
The web console is another alternative you can use to upload your CSV file to the GCS from your local machine. The steps to use the web console are outlined below.
First, you will have to log in to your GCP account. Click the hamburger menu to display a drop-down menu, then select Storage and click Browser in the left tab.
To store the file that you will upload from your local machine, create a new bucket. Make sure the name chosen for the bucket is globally unique.
The bucket you just created will appear in the window; click on it and select Upload files. This action will direct you to your local drive, where you will need to choose the CSV file you want to upload to GCS.
As soon as you start uploading, a progress bar is shown. The bar disappears once the process has been completed. You will be able to find your file in the bucket.
Step 3: Upload Data to BigQuery From GCS
BigQuery is where the data analysis you need will be carried out. Hence you need to upload your data from GCS to BigQuery. There are various methods that you can use to upload your files from GCS to BigQuery. Let’s discuss 2 methods here:
Method 1: Using the Web Console UI
The first port of call when using the Web UI method is to select BigQuery under the hamburger menu on the GCP home page.
Select the “Create a new dataset” icon and fill in the corresponding drop-down menu.
Create a new table under the data set you just created to store your CSV file.
On the Create Table page, in the Source Data section, select GCS to browse your bucket and pick the CSV file you uploaded to GCS. Make sure your File Format is set to CSV.
Fill in the destination tab and the destination table.
Under schema, click on the auto-detect schema.
Select create a table.
After creating the table, click on the destination table name you created to view your exported data file.
To use the command-line interface instead, click the Activate Cloud Shell icon, which will take you to a command-line session. You can also use the auto-detect feature to specify your schema.
Alternatively, your schema can be specified explicitly from the command line. An example is shown below:
bq load --source_format=CSV --schema=schema.json your_dataset.your_table gs://your_bucket/your_file.csv
In the above example, schema.json refers to the file containing the schema definition for your CSV file. You can customize the schema by modifying the schema.json file to match the structure of your data.
There are 3 ways to write to an existing table on BigQuery. You can make use of any of them to write to your table. Illustrations of the options are given below
1. Overwrite the data
To overwrite the data in an existing table, you can use the --replace flag in the bq command. Here’s an example code:
bq load --replace --source_format=CSV your_dataset.your_table gs://your_bucket/your_file.csv
In the above code, the --replace flag ensures that the existing data in the table is replaced with the new data from the CSV file.
2. Append the table
To append data to an existing table, you can use the --noreplace flag in the bq command. Here’s an example code:
bq load --noreplace --source_format=CSV your_dataset.your_table gs://your_bucket/your_file.csv
The --noreplace flag ensures that the new data from the CSV file is appended to the existing data in the table.
3. Add a new field to the target table. An extra field will be added to the schema.
To add a new field (column) to the target table, you can use the bq update command and specify the schema changes. Here’s an example code:
bq update your_dataset.your_table --schema schema.json
In the above code, schema.json refers to the file containing the updated schema definition with the new field. You need to modify the schema.json file to include the new field and its corresponding data type.
Please note that these examples assume you have the necessary permissions and have set up the required authentication for interacting with BigQuery.
Step 4: Update the Target Table in BigQuery
GCS acts as a staging area for BigQuery, so when you use the command line to upload to BigQuery, your data is first stored in an intermediate table. The data in the intermediate table then needs to be merged into the target table for the changes to take effect.
There are two ways to update the target table in BigQuery.
1. Update the rows in the final table and insert new rows from the intermediate table.
UPDATE final_table t SET t.value = s.value
FROM intermediate_data_table s
WHERE t.id = s.id;
INSERT INTO final_table (id, value)
SELECT id, value
FROM intermediate_data_table
WHERE id NOT IN (SELECT id FROM final_table);
In the above code, final_table refers to the name of your target table, and intermediate_data_table refers to the name of the intermediate table where your data is initially loaded.
2. Delete all the rows from the final table which are in the intermediate table.
DELETE FROM final_table
WHERE id IN (SELECT id FROM intermediate_data_table);
In the above code, final_table refers to the name of your target table, and intermediate_data_table refers to the name of the intermediate table where your data is initially loaded.
Please make sure to replace final_table and intermediate_data_table with the actual table names, you are working with.
This marks the completion of the SQL Server to BigQuery connection. Now you can seamlessly sync your CSV files into a GCP bucket to integrate SQL Server with BigQuery and supercharge your analytics to get insights from your SQL Server database.
Limitations of Manual ETL Process to Set Up Microsoft SQL Server to BigQuery Integration
Businesses need to put systems in place that will enable them to gain the insights they need from their data. These systems have to be seamless and rapid. Using custom ETL scripts to connect MS SQL Server to BigQuery has the following limitations that will affect the reliability and speed of these systems:
Writing custom code is only ideal if you’re looking to move your data once from Microsoft SQL Server to BigQuery.
Custom ETL code does not scale well with stream and real-time data. You will have to write additional code to update your data. This is far from ideal.
When there’s a need to transform or encrypt your data, custom ETL code fails as it will require you to add additional processes to your pipeline.
Maintaining and managing a running data pipeline such as this will need you to invest heavily in engineering resources.
BigQuery does not ensure data consistency for external data sources, as changes to the data may cause unexpected behavior while a query is running.
The data set’s location must be in the same region or multi-region as the Cloud Storage Bucket.
CSV files cannot contain nested or repetitive data since the format does not support it.
When utilizing a CSV, including compressed and uncompressed files in the same load job is impossible.
The maximum size of a gzip file for CSV is 4 GB.
While writing code to move data from SQL Server to BigQuery looks like a no-brainer, in the beginning, the implementation and management are much more nuanced than that. The process has a high propensity for errors which will, in turn, have a huge impact on the data quality and consistency.
Benefits of Migrating your Data from SQL Server to BigQuery
Integrating data from SQL Server to BigQuery offers several advantages. Here are a few usage scenarios:
Advanced Analytics: The BigQuery destination’s extensive data processing capabilities allow you to run complicated queries and data analyses on your SQL Server data, deriving insights that would not be feasible with SQL Server alone.
Data Consolidation: If you’re using various sources in addition to SQL Server, syncing to a BigQuery destination allows you to centralize your data for a more complete picture of your operations, as well as set up a change data capture process to ensure that there are no discrepancies in your data.
Historical Data Analysis: SQL Server has limitations with historical data. Syncing data to the BigQuery destination enables long-term data retention and study of historical trends over time.
Data Security and Compliance: The BigQuery destination includes sophisticated data security capabilities. Syncing SQL Server data to a BigQuery destination secures your data and enables comprehensive data governance and compliance management.
Scalability: The BigQuery destination can manage massive amounts of data without compromising speed, making it a perfect solution for growing enterprises with expanding SQL Server data.
Conclusion
This article gave you a comprehensive guide to setting up Microsoft SQL Server to BigQuery integration using 2 popular methods. It also gave you a brief overview of Microsoft SQL Server and Google BigQuery. There are also certain limitations associated with the custom ETL method to connect SQL Server to BigQuery.
With LIKE.TG , you can achieve simple and efficient data replication from Microsoft SQL Server to BigQuery. LIKE.TG can help you move data from not just SQL Server but 150+ additional data sources.
Visit our Website to Explore LIKE.TG
Businesses can use automated platforms like LIKE.TG Data to set up this integration and handle the ETL process. It helps you directly transfer data from a source of your choice to a Data Warehouse, Business Intelligence tools, or any other desired destination in a fully automated and secure manner without having to write any code, and it provides a hassle-free experience of connecting your SQL Server to your BigQuery instance.
Want to try LIKE.TG ? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite first hand. Have a look at our unbeatable LIKE.TG Pricing, which will help you choose the right plan for you.
Share your experience of loading data from Microsoft SQL Server to BigQuery in the comment section below.

How to Load Google Sheets Data to MySQL: 2 Easy Methods
While Google Sheets provides some impressive features, the capabilities for more advanced Data Visualization and Querying make the transfer from Google Sheets to MySQL Database useful. Are you trying to move data from Google Sheets to MySQL to leverage the power of SQL for data analysis, or are you simply looking to back up data from Google Sheets? Whichever is the case, this blog can surely provide some help.
The article will introduce you to 2 easy methods to move data from Google Sheets to MySQL in real-time. Read along to decide which method suits you the best!
Introduction to Google Sheets
Google Sheets is a free web-based spreadsheet program that Google provides. It allows users to create and edit spreadsheets, but more importantly, it allows multiple users to collaborate on a single document and see each collaborator’s contributions in real time. It’s part of the Google suite of applications, a collection of free productivity apps owned and maintained by Google.
Despite being free, Google Sheets is a fully functional spreadsheet program, with most of the capabilities and features of more expensive spreadsheet software. Google Sheets is compatible with the most popular spreadsheet formats so that you can continue your work. With Google Sheets, like all Google Drive programs, your files are accessible via computer and/or mobile devices.
To learn more about Google Sheets, you can check out its official documentation.
Introduction to MySQL
MySQL is an open-source relational database management system, or RDBMS, and it is managed using Structured Query Language, or SQL, hence its name. MySQL was originally developed and owned by the Swedish company MySQL AB, but Sun Microsystems acquired MySQL AB in 2008. In turn, Sun Microsystems was bought by Oracle two years later, making them the present owners of MySQL.
MySQL is a very popular database program that is used in several equally popular systems such as the LAMP stack (Linux, Apache, MySQL, Perl/PHP/Python), Drupal, and WordPress, just to name a few, and it is used by many of the largest and most popular websites, including Facebook, Flickr, Twitter, and YouTube. MySQL is also incredibly versatile, as it works on various operating systems and platforms, from Microsoft Windows to Apple macOS.
Move Google Sheets Data to MySQL Using These 2 Methods
There are several ways that data can be migrated from Google Sheets to MySQL. A common method to import data from Google Sheets to MySQL is by using the Google Sheets API along with MySQL connectors. Out of them, these 2 methods are the most feasible:
Method 1: Manually using the command line
Method 2: Using LIKE.TG to Set Up Google Sheets to MySQL Integration
Method 1: Connecting Google Sheets to MySQL Manually Using the Command Line
Moving data from Google Sheets to MySQL involves various steps. This example demonstrates how to create a table for the product listing data in Google Sheets, assuming that the data is in two columns:
Id
Name
To do this migration, you can follow these steps:
Step 1: Prepare your Google Sheets Data
Firstly, you must ensure that the data in your Google Sheets is clean and formatted correctly.
Then, to export your Google Sheets data, click on File > Download and choose a suitable format for MySQL import. CSV (Comma-separated values) is a common choice for this purpose.
After this, your CSV file will get downloaded to your local machine.
Step 2: Create a MySQL database and Table
Login to your MySQL server using the command prompt.
Create a database using the following command:
CREATE DATABASE your_database_name;
Use that Database by running the command:
USE your_database_name;
Now, create a table in your database using the following command:
CREATE TABLE your_table_name (
column1_name column1_datatype,
column2_name column2_datatype,
...
);
Step 3: Upload your CSV data to MySQL
Use the LOAD DATA INFILE command to import the CSV file. The command will look something like this:
LOAD DATA INFILE '/path/to/your/file.csv'
INTO TABLE your_table_name
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;
Note: The file path should be the absolute path to where the CSV file is stored on the server. If you’re importing the file from your local machine to a remote server, you may first need to copy it over using a tool like pscp.exe (which ships with PuTTY), and then run the LOAD DATA command on the server to import the data into your MySQL database.
After running the above command, your data will be migrated from Google Sheets to MySQL.
Step 4: Clean Up and Validate
Review the data. Check for any anomalies or issues with the imported data.
Run some queries to validate the imported data.
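If you prefer to script this check, here is a small sketch using mysql-connector-python that compares the row count in MySQL with the number of data rows in the exported CSV; the file path, database, and table names are the ones used above, and the credentials are placeholders:

import csv

import mysql.connector

# Count data rows in the exported CSV (excluding the header row).
with open("/path/to/your/file.csv", newline="") as f:
    csv_rows = sum(1 for _ in csv.reader(f)) - 1

conn = mysql.connector.connect(
    host="localhost", user="<user>", password="<password>",
    database="your_database_name",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM your_table_name")
db_rows = cur.fetchone()[0]
conn.close()

print(f"CSV rows: {csv_rows}, MySQL rows: {db_rows}")
assert csv_rows == db_rows, "Row counts differ - check the import"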
Limitations and Challenges of Using the Command Line Method to Connect Google Sheets to MySQL
Complex: It requires technical knowledge of SQL and the command line, so it can be difficult to implement for people with little or no technical knowledge.
Error-prone: It provides limited feedback or error messages, making debugging challenging.
Difficult to scale: Scaling command-line solutions for larger datasets or more frequent updates gets trickier and more error-prone.
Method 2: Connecting Google Sheets to MySQL Using LIKE.TG
The manual method above can be time-consuming and difficult to implement for people with little or no technical knowledge. LIKE.TG is a no-code data pipeline platform that can automate this process for you.
You can transfer your Google Sheet data to MySQL using just two steps:
Step 1: Configure the Source
Log into your LIKE.TG Account
Go to Pipelines and select the ‘create’ option.
Select ‘Google Sheets’ as your source.
Fill in all the required fields and click on Test & Continue.
Step 2: Configure the Destination
Select MySQL as your destination.
Fill out the required fields and click on Save & Continue.
With these extremely simple steps, you have created a data pipeline to migrate your data seamlessly from Google Sheets to MySQL.
Advantages of Using LIKE.TG to Connect Google Sheets to MySQL Database
The relative simplicity of using LIKE.TG as a data pipeline platform, coupled with its reliability and consistency, takes the difficulty out of data projects.
You can also read our article about Google Sheets to Google Data Studio.
It was great. All I had to do was do a one-time setup and the pipelines and models worked beautifully. Data was no more the bottleneck
– Abhishek Gadela, Solutions Engineer, Curefit
Why Connect Google Sheets to MySQL Database?
Real-time Data Updates: By syncing Google Sheets with MySQL, you can keep your spreadsheets up to date without updating them manually.
Centralized Data Management: In MySQL, large datasets are stored and managed centrally to facilitate a consistent view across the various Google Sheets.
Historical Data Analysis: Google Sheets has limits on historical data. Syncing data to MySQL allows for long-term data retention and analysis of historical trends over time.
Scalability: MySQL can handle enormous datasets efficiently, tolerating expansion and complicated data structures better than spreadsheets alone.
Data Security: Control access rights and encryption mechanisms in MySQL to secure critical information.
Additional Resources on Google Sheets to MySQL
More on Google Script Connect to MySQL
Conclusion
The blog provided a detailed explanation of 2 methods to set up your Google Sheets to MySQL integration.
Although effective, the manual command line method is time-consuming and requires a lot of code. You can use LIKE.TG to import data from Google Sheets to MySQL and handle the ETL process. To learn more about how to import data from various sources to your desired destination, sign up for LIKE.TG ’s 14-day free trial.
FAQ on Google Sheets to MySQL
Can I connect Google Sheets to SQL?
Yes, you can connect Google Sheets to SQL databases.
How do I turn a Google Sheet into a database?
1. Use Google Apps Script
2. Third-party add-ons
3. Use formulas and functions
How do I sync MySQL to Google Sheets?
1. Use Google Apps Script
2. Third-party add-ons
3. Google Cloud Functions and Google Cloud SQL
Can Google Sheets pull data from a database?
Yes, Google Sheets can pull data from a database.
How do I import Google Sheets to MySQL?
1. Use Google Apps Script
2. Third-party add-ons
3. CSV export and import
Share your experience of connecting Google Sheets to MySQL in the comments section below!

Shopify to MySQL: 2 Easy Methods
Shopify is an eCommerce platform that enables businesses to sell their products in an online store without spending time and effort on developing the store software. Even though Shopify provides its suite of analytics reports, it is not always easy to combine Shopify data with the organization’s on-premise data and run analysis tasks. Therefore, most organizations must load Shopify data into their relational databases or data warehouses. In this post, we will discuss how to load data from Shopify to MySQL, one of the most popular relational databases in use today.
Understanding the Methods to connect Shopify to MySQL
Method 1: Using LIKE.TG to connect Shopify to MySQL
LIKE.TG enables seamless integration of your Shopify data to MySQL Server, ensuring comprehensive and unified data analysis. This simplifies combining and analyzing Shopify data alongside other organizational data for deeper insights.
Get Started with LIKE.TG for Free
Method 2: Using Custom ETL Code to connect Shopify to MySQL
Connect Shopify to MySQL using custom ETL code. This method uses either Shopify’s Export option or REST APIs. The detailed steps are mentioned below.
Method 1: Using LIKE.TG to connect Shopify to MySQL
The best way to avoid the above limitations is to use a fully managed Data Pipeline platform like LIKE.TG , which works out of the box. It will automate your data flow in minutes without writing a single line of code. Its fault-tolerant architecture makes sure that your data is secure and consistent.
LIKE.TG provides a truly efficient and fully automated solution to manage data in real-time and always have analysis-ready data in MySQL.
With LIKE.TG ’s point-and-click interface, loading data from Shopify to MySQL comes down to 2 simple steps:
Step 1: Connect and configure your Shopify data source by providing the Pipeline Name, Shop Name, and Admin API Password.
Step 2: Input credentials to the MySQL destination where the data needs to be loaded. These include the Destination Name, Database Host, Database Port, Database User, Database Password, and Database Name.
More reasons to love LIKE.TG :
Wide Range of Connectors: Instantly connect and read data from 150+ sources, including SaaS apps and databases, and precisely control pipeline schedules down to the minute.
In-built Transformations: Format your data on the fly with LIKE.TG ’s preload transformations using either the drag-and-drop interface or our nifty Python interface. Generate analysis-ready data in your warehouse using LIKE.TG ’s Postload Transformation
Near Real-Time Replication: Get access to near real-time replication for all database sources with log-based replication. For SaaS applications, near real-time replication is subject to API limits.
Auto-Schema Management: Correcting improper schema after the data is loaded into your warehouse is challenging. LIKE.TG automatically maps source schema with the destination warehouse so that you don’t face the pain of schema errors.
Transparent Pricing: Say goodbye to complex and hidden pricing models. LIKE.TG ’s Transparent Pricing brings complete visibility to your ELT spending. Choose a plan based on your business needs. Stay in control with spend alerts and configurable credit limits for unforeseen spikes in the data flow.
24×7 Customer Support: With LIKE.TG you get more than just a platform, you get a partner for your pipelines. Discover peace with round-the-clock “Live Chat” within the platform. What’s more, you get 24×7 support even during the 14-day free trial.
Security: Discover peace with end-to-end encryption and compliance with all major security certifications including HIPAA, GDPR, and SOC-2.
Method 2: Using Custom ETL Code to connect Shopify to MySQL
Shopify provides two options to access its product and sales data:
Use the export option in the Shopify reporting dashboard: This method provides a simple click-to-export function that allows you to export products, orders, or customer data into CSV files. The caveat here is that this will be a completely manual process and there is no way to do this programmatically.
Use Shopify rest APIs to access data: Shopify APIs provide programmatic access to products, orders, sales, and customer data. APIs are subject to throttling for higher request rates and use a leaky bucket algorithm to contain the number of simultaneous requests from a single user. The leaky bucket algorithm works based on the analogy of a bucket that leaks at the bottom. The leak rate is the number of requests that will be processed simultaneously and the size of the bucket is the number of maximum requests that can be buffered. Anything over the buffered request count will lead to an API error informing the user of the request rate limit in place.
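To make the throttling behaviour concrete, here is a minimal Python sketch (assuming the requests library; the URL and credentials are placeholders) of a client-side guard that backs off whenever the API signals that the request bucket has overflowed:
import time
import requests

def shopify_get(url, api_user, api_password):
    # Retry until the request fits into Shopify's leaky bucket
    while True:
        response = requests.get(url, auth=(api_user, api_password))
        if response.status_code == 429:
            # Bucket is full; wait for it to leak before retrying.
            # Shopify normally indicates the wait time in the Retry-After header.
            time.sleep(float(response.headers.get("Retry-After", 2.0)))
            continue
        response.raise_for_status()
        return response.json()

# Example usage (placeholder shop and credentials):
# shop_info = shopify_get("https://shop.myshopify.com/admin/api/2019-10/shop.json", "user", "password")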
Let us now move into how data can be loaded to MySQL using each of the above methods:
Step 1: Using Shopify Export Option
Step 2: Using Shopify REST APIs to Access Data
Step 1: Using Shopify Export Option
The first method provides simple click-and-export solutions to get the product, orders, and customer data into CSV. This CSV can then be used to load to a MySQL instance. The below steps detail how Shopify customers’ data can be loaded to MySQL this way.
Go to Shopify admin and go to the customer’s tab.
Click Export.
Select whether you want to export all customers or a specified list of customers. Shopify allows you to select or search customers if you only want to export a specific list.
After selecting customers, select ‘plain CSV’ as the file format.
Click Export Customers and Shopify will provide you with a downloadable CSV file.
Login to MySQL and use the below statement to create a table according to the Shopify format.
CREATE TABLE customers (
  id INT(6) UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  firstname VARCHAR(30) NOT NULL,
  lastname VARCHAR(30) NOT NULL,
  email VARCHAR(50),
  company VARCHAR(50),
  address1 VARCHAR(50),
  address2 VARCHAR(50),
  city VARCHAR(50),
  province VARCHAR(50),
  province_code VARCHAR(50),
  country VARCHAR(50),
  country_code VARCHAR(50),
  zip VARCHAR(50),
  phone VARCHAR(50),
  accepts_marketing VARCHAR(50),
  total_spent DOUBLE,
  total_orders INT,
  tags VARCHAR(50),
  notes VARCHAR(50),
  tax_exempt VARCHAR(50)
);
Load data using the following command:
LOAD DATA INFILE 'customers.csv' INTO TABLE customers FIELDS TERMINATED BY ',' ENCLOSED BY '"' LINES TERMINATED BY '\r\n' IGNORE 1 LINES;
Now, that was very simple. The problem, however, is that this is a manual process and cannot be automated programmatically.
If you want to set up a continuous syncing process, this method will not be helpful. For that, we will need to use the Shopify APIs.
Step 2: Using Shopify REST APIs to Access Data
Shopify provides a large set of APIs that are meant for building applications that interact with Shopify data. Our focus today will be on the product APIs allowing users to access all the information related to products belonging to the specific user account.
We will be using the Shopify private apps mechanism to interact with APIs. Private Apps are Shopify’s way of letting users interact with only a specific Shopify store.
In this case, authentication is done by generating a username and password from the Shopify Admin. If you need to build an application that any Shopify store can use, you will need a public app configuration with OAuth authentication.
Before beginning the steps, ensure you have gone to Shopify Admin and have access to the generated username and password.
Once you have access to the credential, accessing the APIs is very easy and is done using basic HTTP authentication. Let’s look into how the most basic API can be called using the generated username and password.
curl --user user:password -X GET https://shop.myshopify.com/admin/api/2019-10/shop.json
To get a list of all the products in Shopify use the following command:
curl --user user:password -X GET "https://shop.myshopify.com/admin/api/2019-10/products.json?limit=100"
Please note this endpoint is paginated and will return only a maximum of 250 results per page. The default pagination limit is 50 if the limit parameter is not given.
From the initial response, users need to store the id of the last product they received and then use it with the next request to get to the next page:
curl --user user:password -X GET "https://shop.myshopify.com/admin/api/2019-10/products.json?limit=100&since_id=632910392" -o products.json
Where since_id is the last product ID that was received on the previous page.
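If you want to script this pagination instead of issuing curl calls by hand, the following Python sketch (placeholder shop name and credentials, assuming the requests library) keeps requesting pages until an empty page is returned:
import requests

def fetch_all_products(shop, api_user, api_password, page_size=100):
    base_url = f"https://{shop}.myshopify.com/admin/api/2019-10/products.json"
    products, since_id = [], 0
    while True:
        response = requests.get(
            base_url,
            params={"limit": page_size, "since_id": since_id},
            auth=(api_user, api_password),
        )
        response.raise_for_status()
        page = response.json().get("products", [])
        if not page:
            break  # no more pages to fetch
        products.extend(page)
        since_id = page[-1]["id"]  # the last product id becomes the cursor for the next page
    return products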
The response from the API is a nested JSON that contains all the information related to the products such as title, description, images, etc., and more importantly, the variants sub-JSON which provides all the variant-specific information like barcode, price, inventory_quantity, and much more information.
Users need to parse this JSON output and convert the JSON file into a CSV file of the required format before loading it to MySQL.
For this, we are using the Linux command-line utility called jq. You can read more about this utility here. For simplicity, we are only extracting the id, product_type, and title of each product from the result. Assuming your API response is stored in products.json, the following command produces the CSV:
cat products.json | jq -r '.products[] | [.id, .product_type, .title] | @csv' >> products.csv
Please note you will need to write complicated JSON parsers if you need to retrieve more fields.
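If maintaining jq filters becomes unwieldy, the same flattening can be done with a short Python script. This is only a sketch; it assumes the products.json file produced above and extracts the same three fields, so extend the row list if you need more columns:
import csv
import json

with open("products.json") as infile:
    products = json.load(infile).get("products", [])

with open("products.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    for product in products:
        # Pull additional attributes here if your MySQL table has more columns
        writer.writerow([product["id"], product.get("product_type", ""), product.get("title", "")])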
Once the CSV file is obtained, create the required MySQL table beforehand and load the data using the ‘LOAD DATA INFILE’ command shown in the previous section, for example:
LOAD DATA INFILE 'products.csv' INTO TABLE products
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
Now you have your Shopify product data in your MySQL.
Limitations of Using Custom ETL Code to Connect Shopify to MySQL
Shopify provides two easy methods to retrieve the data into files. But, both these methods are easy only when the requests are one-off and the users do not need to execute them continuously in a programmatic way. Some of the limitations and challenges that you may encounter are as follows:
The above process works fine if you want to bring a limited set of data points from Shopify to MySQL. You will need to write a complicated JSON parser if you need to extract more data points
This approach fits well if you need a one-time or batch data load from Shopify to MySQL. In case you are looking at real-time data sync from Shopify to MySQL, the above method will not work.
An easier way to accomplish this would be using a fully-managed data pipeline solution like LIKE.TG , which can mask all these complexities and deliver a seamless data integration experience from Shopify to MySQL.
Analyze Shopify Data on MySQL using LIKE.TG
Use Cases of Shopify to MySQL Integration
Connecting data from Shopify to MySQL has various advantages. Here are a few usage scenarios:
Advanced Analytics: MySQL’s extensive data processing capabilities allow you to run complicated queries and data analysis on your Shopify data, resulting in insights that would not be achievable with Shopify alone.
Data Consolidation: If you’re using various sources in addition to Shopify, syncing to MySQL allows you to centralize your data for a more complete picture of your operations, as well as set up a change data capture process to ensure that there are no data conflicts in the future.
Historical Data Analysis: Shopify has limitations with historical data. Syncing data to MySQL enables long-term data retention and trend monitoring over time.
Data Security and Compliance: MySQL offers sophisticated data security measures. Syncing Shopify data to MySQL secures your data and enables advanced data governance and compliance management.
Scalability: MySQL can manage massive amounts of data without compromising performance, making it a perfect alternative for growing enterprises with expanding Shopify data.
Conclusion
This blog talks about the different methods you can use to connect Shopify to MySQL in a seamless fashion: using custom ETL Scripts and a third-party tool, LIKE.TG .
That’s it! No Code, No ETL. LIKE.TG takes care of loading all your data in a reliable, secure, and consistent fashion from Shopify to MySQL.
LIKE.TG can additionally connect to a variety of data sources (Databases, Cloud Applications, Sales and Marketing tools, etc.), making it easy to scale your data infrastructure at will. It helps transfer data from Shopify to a destination of your choice for free.
FAQ on Shopify to MySQL
How to connect Shopify to MySQL database?
To connect Shopify to MySQL database, you need to use Shopify’s API to fetch data, then write a script in Python or PHP to process and store this data in MySQL. Finally, schedule the script periodically.
Does Shopify use SQL or NoSQL?
Shopify primarily uses SQL databases for its core data storage and management.
Does Shopify have a database?
Yes, Shopify does have a database infrastructure.
What is the URL for MySQL Database?
The URL for accessing a MySQL database follows this format: mysql://username:password@hostname:port/database_name. Replace username, password, hostname, port, and database_name with your details.
What server is Shopify on?
Shopify operates its infrastructure to host its platform and services.
Sign up for a 14-day free trial. Sign up today to explore how LIKE.TG makes Shopify to MySQL a cakewalk for you!
What are your thoughts about the different approaches to moving data from Shopify to MySQL? Let us know in the comments.

How to Sync Data from PostgreSQL to Google Bigquery in 2 Easy Methods
Are you trying to derive deeper insights from PostgreSQL by moving the data into a Data Warehouse like Google BigQuery? Well, you have landed on the right article. Now, it has become easier to replicate data from PostgreSQL to BigQuery. This article will give you a brief overview of PostgreSQL and Google BigQuery. You will also get to know how you can set up your PostgreSQL to BigQuery integration using 2 methods.
Moreover, the limitations in the case of the manual method will also be discussed in further sections. Read along to decide which method of connecting PostgreSQL to BigQuery is best for you.
Introduction to PostgreSQL
PostgreSQL, although primarily used as an OLTP Database, is one of the popular tools for analyzing data at scale. Its novel architecture, reliability at scale, robust feature set, and extensibility give it an advantage over other databases.
Introduction to Google BigQuery
Google BigQuery is a serverless, cost-effective, and highly scalable Data Warehousing platform with Machine Learning capabilities built-in.
BigQuery’s BI Engine accelerates analytical operations in memory. The platform combines speedy SQL queries with the processing capacity of Google’s infrastructure, and offers access control so that you can manage which users can view and query data.
BigQuery is used by several firms, including UPS, Twitter, and Dow Jones. BigQuery is used by UPS to predict the exact volume of packages for its various services.
BigQuery is used by Twitter to help with ad updates and the combining of millions of data points per second.
BigQuery offers the following features for data privacy and the protection of your data:
Encryption at rest
Integration with Cloud Identity
Network isolation
Access Management for granular access control
Methods to Set up PostgreSQL to BigQuery Integration
This blog focuses on detailing the steps and challenges of each approach, so that by the end you have the right details to choose between them. Below are the 2 methods:
Method 1: Using LIKE.TG Data to Set Up PostgreSQL to BigQuery Integration
The steps to load data from PostgreSQL to BigQuery using LIKE.TG Data are as follows:
Step 1: Connect your PostgreSQL account to LIKE.TG ’s platform. LIKE.TG has an in-built PostgreSQL Integration that connects to your account within minutes.
The available ingestion modes are Logical Replication, Table, and Custom SQL. Additionally, the XMIN ingestion mode is available for Early Access. Logical Replication is the recommended ingestion mode and is selected by default.
Step 2: Select Google BigQuery as your destination and start moving your data.
With this, you have successfully set up Postgres to BigQuery replication using LIKE.TG Data.
Here are more reasons to try LIKE.TG :
Schema Management: LIKE.TG takes away the tedious task of schema management; it automatically detects the schema of incoming data and maps it to the destination schema.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Data Transformation:It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Method 2: Manual ETL Process to Set Up PostgreSQL to BigQuery Integration
To execute the following steps, you need a pre-existing database and a table populated with PostgreSQL records.
Let’s take a detailed look at each step.
Step 1: Extract Data From PostgreSQL
The data from PostgreSQL needs to be extracted and exported into a CSV file. To do that, write the following command in the PostgreSQL workbench.
COPY your_table_name TO 'new_file_location\new_file_name' CSV HEADER;
After the data is successfully exported to a CSV file, you should see a confirmation message such as COPY <row_count> on your console.
Step 2: Clean and Transform Data
To upload the data to Google BigQuery, you need the tables and the data to be compatible with the BigQuery format. The following things need to be kept in mind while migrating data to BigQuery:
BigQuery expects CSV data to be UTF-8 encoded.
BigQuery doesn’t enforce Primary Key and unique key constraints. Your ETL process must do so.
Postgres and BigQuery have different column types. However, most of them are convertible; a representative mapping of common data types to their BigQuery equivalents is listed at the end of this checklist.
You can visit their official page to know more about BigQuery data types.
DATE value must be a dash(-) separated and in the form YYYY-MM-DD (year-month-day).
Fortunately, the default date format in Postgres is the same, YYYY-MM-DD. So if you are simply selecting date columns, they should already be in the correct format.
The TO_DATE function in PostgreSQL helps in converting string values into dates.
If the data is stored as a string in the table for any reason, it can be converted while selecting data.
Syntax : TO_DATE(str,format)
Example : SELECT TO_DATE('31,12,1999','DD,MM,YYYY');
Result : 1999-12-31
In TIMESTAMP type, the hh:mm:ss (hour-minute-second) portion must use a colon (:) separator.
Similar to the Date type, the TO_TIMESTAMP function in PostgreSQL is used to convert strings into timestamps.
Syntax : TO_TIMESTAMP(str,format)
Example : SELECT TO_TIMESTAMP('2017-03-31 9:30:20','YYYY-MM-DD HH:MI:SS');
Result: 2017-03-31 09:30:20-07
Make sure text columns are quoted if they can potentially have delimiter characters.
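As promised in the checklist above, here is a representative (not exhaustive) mapping of common PostgreSQL column types to their BigQuery equivalents, based on the type systems both vendors document:
SMALLINT / INTEGER / BIGINT → INT64
NUMERIC / DECIMAL → NUMERIC
REAL / DOUBLE PRECISION → FLOAT64
BOOLEAN → BOOL
CHAR / VARCHAR / TEXT → STRING
BYTEA → BYTES
DATE → DATE
TIME → TIME
TIMESTAMP (without time zone) → DATETIME
TIMESTAMPTZ → TIMESTAMP
JSON / JSONB → JSON (or STRING)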
Step 3: Upload to Google Cloud Storage (GCS) bucket
If you haven’t already, you need to create a storage bucket in Google Cloud for the next step.
3. a) Go to your Google Cloud account and Select the Cloud Storage → Bucket.
3. b) Select a bucket from your existing list of buckets. If you do not have a previously existing bucket, you must create a new one. You can follow Google’s Official documentation to create a new bucket.
3. c) Upload your .csv file into the bucket by clicking the upload file option. Select the file that you want to upload.
Step 4: Upload to BigQuery table from GCS
4. a) Go to the Google Cloud console and select BigQuery from the dropdown. Once you do so, a list of project IDs will appear. Select the Project ID you want to work with and select Create Dataset
4. b) Provide the configuration per your requirements and create the dataset.
Your dataset should be successfully created after this process.
4. c) Next, you must create a table in this dataset. To do so, select the project ID where you had created the dataset and then select the dataset name that was just created. Then click on Create Table from the menu, which appears at the side.
4. d) To create a table, select the source as Google Cloud Storage. Next, select the correct GCS bucket with the .csv file. Then, select the file format that matches the GCS bucket. In your case, it should be in .csv file format. You must provide a table name for your table in the BigQuery database. Select the mapping option as auto-mapping if you want to migrate the data as it is.
4. e) Your table should be created next and loaded with the same data from PostgreSQL.
Step 5: Query the table in BigQuery
After loading the table into BigQuery, you can query it by selecting the QUERY option above the table. You can query your table by writing basic SQL syntax.
Note: Mention the correct project ID, dataset name, and table name.
For example, a query can extract records from the emp table where the job is manager, as sketched below.
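The console’s QUERY editor is the quickest way to run this, but the same query can also be issued programmatically. Below is a minimal sketch using the google-cloud-bigquery client library; the project, dataset, table, and column names are placeholders, so substitute your own:
from google.cloud import bigquery

client = bigquery.Client()  # picks up your default Google Cloud credentials

query = """
    SELECT *
    FROM `your-project-id.your_dataset.emp`
    WHERE job = 'manager'
"""

for row in client.query(query).result():
    print(dict(row))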
Advantages of manually loading the data from PostgreSQL to BigQuery:
Manual migration doesn’t require setting up and maintaining additional infrastructure, which can save on operational costs.
Manual migration processes are straightforward and involve fewer components, reducing the complexity of the operation.
You have complete control over each step of the migration process, allowing for customized data handling and immediate troubleshooting if issues arise.
By manually managing data transfer, you can ensure compliance with specific security and privacy requirements that might be critical for your organization.
Does PostgreSQL Work As a Data Warehouse?
Yes, you can use PostgreSQL as a data warehouse. But the main challenges are:
A data engineer will have to build a data warehouse architecture on top of the existing design of PostgreSQL. To store and build models, you will need to create multiple interlinked databases. But, as PostgreSQL lacks the capability for advanced analytics and reporting, this will further limit its usefulness.
PostgreSQL struggles to process huge data volumes. Data warehouses offer features such as parallel processing for advanced queries, which PostgreSQL lacks, so this level of scalability and low-latency performance is not possible with the database alone.
Limitations of the Manual Method:
The manual migration process can be time-consuming, requiring significant effort to export, transform, and load data, especially if the dataset is large or complex.
Manual processes are susceptible to human errors, such as incorrect data export settings, file handling mistakes, or misconfigurations during import.
If the migration needs to be performed regularly or involves multiple tables and datasets, the repetitive nature of manual processes can lead to inefficiency and increased workload.
Manual migrations can be resource-intensive, consuming significant computational and human resources, which could be utilized for other critical tasks.
Additional Read –
Migrate Data from Postgres to MySQL
PostgreSQL to Oracle Migration
Connect PostgreSQL to MongoDB
Connect PostgreSQL to Redshift
Replicate Postgres to Snowflake
Conclusion
Migrating data from PostgreSQL to BigQuery manually can be complex, but automated data pipeline tools can significantly simplify the process.
We’ve discussed two methods for moving data from PostgreSQL to BigQuery: the manual process, which requires a lot of configuration and effort, and automated tools like LIKE.TG Data.
Whether you choose a manual approach or leverage data pipeline tools like LIKE.TG Data, following the steps outlined in this guide will help ensure a successful migration.
FAQ on PostgreSQL to BigQuery
How do you transfer data from Postgres to BigQuery?
To transfer data from PostgreSQL to BigQuery, export your PostgreSQL data to a format like CSV or JSON, then use BigQuery’s data import tools or APIs to load the data into BigQuery tables.
Can I use PostgreSQL in BigQuery?
No, BigQuery does not natively support PostgreSQL as a database engine. It is a separate service with its own architecture and SQL dialect optimized for large-scale analytics and data warehousing.
Can PostgreSQL be used for Big Data?
Yes, PostgreSQL can handle large datasets and complex queries effectively, making it suitable for big data applications.
How do you migrate data from Postgres to Oracle?
To migrate data from PostgreSQL to Oracle, use Oracle’s Data Pump utility or SQL Developer to export PostgreSQL data as SQL scripts or CSV files, then import them into Oracle using SQL Loader or SQL Developer.

DynamoDB to Snowflake: 3 Easy Steps to Move Data
If you’re looking for DynamoDB Snowflake migration, you’ve come to the right place. Initially, the article provides an overview of the two Database environments while briefly touching on a few of their nuances. Later on, it dives deep into what it takes to implement a solution on your own if you are to attempt the ETL process of setting up and managing a Data Pipeline that moves data from DynamoDB to Snowflake. The article wraps up by pointing out some of the challenges associated with developing a custom ETL solution for loading data from DynamoDB to Snowflake and why it might be worth the investment in having an ETL Cloud service provider, LIKE.TG , implement and manage such a Data Pipeline for you.
Solve your data replication problems with LIKE.TG ’s reliable, no-code, automated pipelines with 150+ connectors. Get your free trial right away!
Overview of DynamoDB and Snowflake
DynamoDB is a fully managed, NoSQL Database that stores data in the form of key-value pairs as well as documents. It is part of Amazon’s cloud services platform, Amazon Web Services (AWS). DynamoDB is known for its super-fast data processing capabilities that boast the ability to process more than 20 million requests per second. In terms of backup management for Database tables, it has the option for On-Demand Backups, in addition to Periodic or Continuous Backups.
Snowflake is a fully managed, Cloud Data Warehousing solution available to customers in the form of Software-as-a-Service (SaaS) or Database-as-a-Service (DaaS). Snowflake follows the standard ANSI SQL protocol that supports fully Structured as well as Semi-Structured data like JSON, Parquet, XML, etc. It is highly scalable in terms of the number of users and computing power while offering pricing at per-second levels of resource usage.
How to move data from DynamoDB to Snowflake
There are two popular methods to perform Data Migration from DynamoDB to Snowflake:
Method 1: Build Custom ETL Scripts to move data from DynamoDB to Snowflake
Method 2: Implement an Official Snowflake ETL Partner such as LIKE.TG Data.
This post covers the first approach in great detail. The blog also highlights the Challenges of Moving Data from DynamoDB to Snowflake using Custom ETL and discusses the means to overcome them.
So, read along to understand the steps to export data from DynamoDB to Snowflake in detail.
Moving Data from DynamoDB to Snowflake using Custom ETL
In this section, you will understand the steps to create a Custom Data Pipeline to load data from DynamoDB to Snowflake.
A Data Pipeline that enables the flow of data from DynamoDB to Snowflake can be characterized through the following steps –
Step 1: Set Up Amazon S3 to Receive Data from DynamoDB
Step 2: Export Data from DynamoDB to Amazon S3
Step 3: Copy Data from Amazon S3 to Snowflake Tables
Step 1: Set Up Amazon S3 to Receive Data from DynamoDB
Amazon S3 is a fully managed Cloud object storage service, also part of AWS, used to import and export files for a variety of purposes. In this use case, S3 is required to temporarily store the data files coming out of DynamoDB before they are loaded into Snowflake tables. To store a data file on S3, one has to create an S3 bucket first. Buckets are placeholders for all objects that are to be stored on Amazon S3. Using the AWS command-line interface, the following is an example command that can be used to create an S3 bucket:
$aws s3api create-bucket --bucket dyn-sfl-bucket --region us-east-1
Name of the bucket – dyn-sfl-bucket
It is not necessary to create folders in a bucket before copying files over, however, it is a commonly adopted practice, as one bucket can hold a variety of information and folders help with better organization and reduce clutter. The following command can be used to create folders –
aws s3api put-object --bucket dyn-sfl-bucket --key dynsfl/
Folder name – dynsfl
Step 2: Export Data from DynamoDB to Amazon S3
Once an S3 bucket has been created with the appropriate permissions, you can now proceed to export data from DynamoDB. First, let’s look at an example of exporting a single DynamoDB table onto S3. It is a fairly quick process, as follows:
First, you export the table data into a CSV file as shown below.
aws dynamodb scan --table-name YOURTABLE --output text > outputfile.txt
The above command would produce a tab-separated output file which can then be easily converted to a CSV file.
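If you prefer to script that conversion end to end, here is a rough boto3 sketch (hypothetical table and file names) that scans the table, follows LastEvaluatedKey pagination, and writes a CSV directly:
import csv
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("YOURTABLE")

items, scan_kwargs = [], {}
while True:
    response = table.scan(**scan_kwargs)
    items.extend(response["Items"])
    if "LastEvaluatedKey" not in response:
        break  # the scan has covered the whole table
    scan_kwargs["ExclusiveStartKey"] = response["LastEvaluatedKey"]

fieldnames = sorted({key for item in items for key in item})
with open("outputfile.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(items)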
Later, this CSV file (testLIKE.TG .csv, let’s say) could then be uploaded to the previously created S3 bucket using the following command:
$aws s3 cp testLIKE.TG .csv s3://dyn-sfl-bucket/dynsfl/
In reality, however, one would need to export tens of tables, sequentially or in parallel, in a repetitive fashion at fixed intervals (e.g., once every 24 hours). For this, Amazon provides an option to create Data Pipelines. Here is an outline of the steps involved in facilitating data movement from DynamoDB to S3 using a Data Pipeline:
Create and validate the Pipeline. The following command can be used to create a Data Pipeline:
$aws datapipeline create-pipeline --name dyn-sfl-pipeline --unique-id token
{
    "pipelineId": "ex-pipeline111"
}
The next step is to upload and validate the Pipeline using a pre-created Pipeline file in JSON format
$aws datapipeline put-pipeline-definition --pipeline-id ex-pipeline111 --pipeline-definition file://dyn-sfl-pipe-definition.json
Activate the Pipeline. Once the above step is completed with no validation errors, this pipeline can be activated using the following –
$aws datapipeline activate-pipeline --pipeline-id ex-pipeline111
Monitor the Pipeline run and verify the data export. The following command shows the execution status:
$aws datapipeline list-runs --pipeline-id ex-pipeline111
Once the ‘Status Ended’ section indicates completion of the execution, go over to the S3 bucket s3://dyn-sfl-bucket/dynsfl/ and check to see if the required export files are available.
Defining the Pipeline file dyn-sfl-pipe-definition.json can be quite time-consuming as there are many things to be defined. Here is a sample file indicating some of the objects and parameters that are to be defined:
{
  "objects": [
    {
      "myComment": "Write a comment here to describe what this section is for and how things are defined",
      "id": "Default",
      "name": "Default",
      "failureAndRerunMode": "cascade",
      "resourceRole": "DataPipelineDefaultResourceRole",
      "role": "DataPipelineDefaultRole",
      "pipelineLogUri": "s3://",
      "schedule": {
        "ref": "DefaultSchedule"
      },
      "scheduleType": "cron"
    },
    {
      "type": "Schedule",
      "id": "DefaultSchedule",
      "name": "dyn-to-sfl",
      "startDateTime": "2019-06-10T03:00:01",
      "occurrences": "1",
      "period": "24 hours",
      "maxActiveInstances": "1"
    }
  ],
  "parameters": [
    {
      "description": "S3 Output Location",
      "id": "DynSflS3Loc",
      "type": "AWS::S3::ObjectKey"
    },
    {
      "description": "Table Name",
      "id": "LIKE.TG _dynamo",
      "type": "String"
    }
  ]
}
As you can see in the above file definition, it is possible to set the scheduling parameters for the Pipeline execution. In this case, the start date and time are set to the early morning of June 10th, 2019, and the execution frequency is set to once a day.
Step 3: Copy Data from Amazon S3 to Snowflake Tables
Once the DynamoDB export files are available on S3, they can be copied over to the appropriate Snowflake tables using a ‘COPY INTO’ command that looks similar to a copy command used in a command prompt. It has a ‘source’, a ‘destination’ and a set of parameters to further define the specific copy operation. A couple of ways to use the COPY command are as follows:
File format:
copy into LIKE.TG _sfl
from s3://dyn-sfl-bucket/dynsfl/testLIKE.TG .csv
credentials=(aws_key_id='ABC123' aws_secret_key='XYZabc')
file_format = (type = csv field_delimiter = ',');
Pattern Matching:
copy into LIKE.TG _sfl
from s3://dyn-sfl-bucket/dynsfl/
credentials=(aws_key_id='ABC123' aws_secret_key='XYZabc')
pattern='*LIKE.TG *.csv';
Just like before, the above is an example of how to use individual COPY commands for quick Ad Hoc Data Migration; however, in reality, this process will be automated and has to be scalable. In that regard, Snowflake provides an option to automatically detect and ingest staged files when they become available in the S3 buckets. This feature is called Automatic Data Loading using Snowpipe. Here are the main features of Snowpipe (a small setup sketch follows this list):
Snowpipe can be set up in a few different ways to look for newly staged files and load them based on a pre-defined COPY command. An example here is to create a Simple-Queue-Service notification that can trigger the Snowpipe data load.
In the case of multiple files, Snowpipe appends these files into a loading queue. Generally, the older files are loaded first; however, this is not guaranteed.
Snowpipe keeps a log of all the S3 files that have already been loaded – this helps it identify a duplicate data load and ignore such a load when it is attempted.
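As an illustration of what that setup can look like, here is a minimal sketch that creates an auto-ingest pipe through Snowflake’s Python connector. It assumes an external stage named dyn_sfl_stage pointing at the S3 bucket has already been created, and the connection parameters and target table name are placeholders:
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="your_warehouse",
    database="your_database",
    schema="public",
)

create_pipe_sql = """
    CREATE PIPE IF NOT EXISTS dyn_sfl_pipe
      AUTO_INGEST = TRUE
      AS COPY INTO target_table
         FROM @dyn_sfl_stage
         FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',')
"""

# The pipe will now load any new files that land in the stage's S3 location
conn.cursor().execute(create_pipe_sql)
conn.close()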
Hurray!! You have successfully loaded data from DynamoDB to Snowflake using Custom ETL Data Pipeline.
Challenges of Moving Data from DynamoDB to Snowflake using Custom ETL
Now that you have an idea of what goes into developing a Custom ETL Pipeline to move DynamoDB data to Snowflake, it should be quite apparent that this is not a trivial task.
To further expand on that, here are a few things that highlight the intricacies and complexities of building and maintaining such a Data Pipeline:
DynamoDB export is a heavily involved process, not least because of having to work with JSON files. Also, when it comes to regular operations and maintenance, the Data Pipeline should be robust enough to handle different types of data errors.
Additional mechanisms need to be put in place to handle incremental data changes from DynamoDB to S3, as running full loads every time is very inefficient.
Most of this process should be automated so that real-time data is available as soon as possible for analysis. Setting everything up with high confidence in the consistency and reliability of such a Data Pipeline can be a huge undertaking.
Once everything is set up, the next thing a growing data infrastructure is going to face is scaling. Depending on the growth, things can scale up really quickly and if the existing mechanisms are not built to handle this scale, it can become a problem.
A Simpler Alternative to Load Data from DynamoDB to Snowflake:
Using a No-Code automated Data Pipeline like LIKE.TG (Official Snowflake ETL Partner), you can move data from DynamoDB to Snowflake in real-time. Since LIKE.TG is fully managed, the setup and implementation time is next to nothing. You can replicate DynamoDB to Snowflake using LIKE.TG ’s visual interface in 3 simple steps:
Connect to your DynamoDB database.
Select the replication mode: (i) Full dump (ii) Incremental load for append-only data (iii) Incremental load for mutable data.
Configure the Snowflake database and watch your data load in real-time.
GET STARTED WITH LIKE.TG FOR FREE
LIKE.TG will now move your data from DynamoDB to Snowflake in a consistent, secure, and reliable fashion. In addition to DynamoDB, LIKE.TG can load data from a multitude of other data sources including Databases, Cloud Applications, SDKs, and more. This allows you to scale up on demand and start moving data from all the applications important for your business.
SIGN UP HERE FOR A 14-DAY FREE TRIAL!
Conclusion
In conclusion, this article offers a step-by-step description of creating Custom Data Pipelines to move data from DynamoDB to Snowflake. It highlights the challenges a Custom ETL solution brings along with it. In a real-life scenario, this would typically mean allocating a good number of human resources for both the development and maintenance of such Data Pipelines to ensure consistent, day-to-day operations. Knowing this, it might be worth exploring and investing in a reliable cloud ETL service provider; LIKE.TG offers comprehensive solutions to use cases such as this one and many more.
VISIT OUR WEBSITE TO EXPLORE LIKE.TG
LIKE.TG Data is a No-Code Data Pipeline that offers a faster way to move data from 150+ Data Sources including 50+ Free Sources, into your Data Warehouse like Snowflake to be visualized in a BI tool. LIKE.TG is fully automated and hence does not require you to code.
Want to take LIKE.TG for a spin?
SIGN UP and experience the feature-rich LIKE.TG suite first hand.
What are your thoughts about moving data from DynamoDB to Snowflake? Let us know in the comments.

How to Load Data from PostgreSQL to Redshift: 2 Easy Methods
Are you tired of locally storing and managing files on your Postgres server? You can move your precious data to a powerful destination such as Amazon Redshift, and that too within minutes. Data engineers are given the task of moving data between storage systems like applications, databases, data warehouses, and data lakes. This can be exhaustive and cumbersome. You can follow this simple step-by-step approach to transfer your data from PostgreSQL to Redshift so that you don’t have any problems with your data migration journey.
Why Replicate Data from Postgres to Redshift?
Analytics: Postgres is a powerful and flexible database, but it’s probably not the best choice for analyzing large volumes of data quickly. Redshift is a columnar database that supports massive analytics workloads.
Scalability: Redshift can quickly scale without any performance problems, whereas Postgres may not efficiently handle massive datasets.
OLTP and OLAP: Redshift is designed for Online Analytical Processing (OLAP), making it ideal for complex queries and data analysis. Whereas, Postgres is an Online Transactional Processing (OLTP) database optimized for transactional data and real-time operations.
Methods to Connect or Move PostgreSQL to Redshift
Method 1: Connecting Postgres to Redshift Manually
Prerequisites:
Postgres Server installed on your local machine.
Billing enabled AWS account.
Step 1: Configure PostgreSQL to export data as CSV
Step 1. a) Go to the directory where PostgreSQL is installed.
Step 1. b) Open Command Prompt from that file location.
Step 1. c) Now, we need to enter into PostgreSQL. To do so, use the command:
psql -U postgres
Step 1. d) To see the list of databases, you can use the command:
\l
I have already created a database named productsdb here. We will be exporting tables from this database.
This is the table I will be exporting.
Step 1. e) To export as .csv, use the following command:
\copy products TO '<your_file_location><your_file_name>.csv' DELIMITER ',' CSV HEADER;
Note: This will create a new file at the mentioned location.
Go to your file location to see the saved CSV file.
Step 2: Load CSV to S3 Bucket
Step 2. a) Log Into your AWS Console and select S3.
Step 2. b) Now, we need to create a new bucket and upload our local CSV file to it.
You can click Create Bucket to create a new bucket.
Step 2. c) Fill in the bucket name and required details.
Note: Uncheck Block Public Access
Step 2. d) To upload your CSV file, go to the bucket you created.
Click on upload to upload the file to this bucket.
You can now see the file you uploaded inside your bucket.
Step 3: Move Data from S3 to Redshift
Step 3. a) Go to your AWS Console and select Amazon Redshift.
Step 3. b) For Redshift to load data from S3, it needs permission to read data from S3. To assign this permission, we can create an IAM role. Go to Security and encryption.
Click on Manage IAM roles followed by Create IAM role.
Note: I will select all s3 buckets. You can select specific buckets and give access to them.
Click Create.
Step 3. c) Go back to your Namespace and click on Query Data.
Step 3. d) Click on Load Data to load data in your Namespace.
Click on Browse S3 and select the required Bucket.
Note: I don’t have a table created, so I will click Create a new table, and Redshift will automatically create a new table.
Note: Select the IAM role you just created and click on Create.
Step 3. e) Click on Load Data.
A Query will start that will load your data from S3 to Redshift.
Step 3. f) Run a Select Query to view your table.
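Under the hood, the Load Data wizard issues a Redshift COPY command. If you would rather script Steps 3.d and 3.e than click through the console, here is a rough sketch using psycopg2; the cluster endpoint, credentials, bucket path, and IAM role ARN are placeholders:
import psycopg2

conn = psycopg2.connect(
    host="your-cluster-endpoint.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="your_user",
    password="your_password",
)

copy_sql = """
    COPY products
    FROM 's3://your-bucket-name/products.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/your-redshift-s3-role'
    FORMAT AS CSV
    IGNOREHEADER 1;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # commits automatically when the block exits cleanly

conn.close()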
Method 2: Using LIKE.TG Data to connect PostgreSQL to Redshift
Prerequisites:
Access to PostgreSQL credentials.
Billing Enabled Amazon Redshift account.
Signed Up LIKE.TG Data account.
Step 1: Create a new Pipeline
Step 2: Configure the Source details
Step 2. a) Select the objects that you want to replicate.
Step 3: Configure the Destination details.
Step 3. a) Give your destination table a prefix name.
Note: Keep Schema mapping turned on. This feature by LIKE.TG will automatically map your source table schema to your destination table.
Step 4: Your Pipeline is created, and your data will be replicated from PostgreSQL to Amazon Redshift.
Limitations of Using Custom ETL Scripts
The following challenges have an impact on your ability to keep consistent and accurate data available in Redshift in near real-time.
The Custom ETL Script method works well only if you have to move data only once or in batches from PostgreSQL to Redshift.
The Custom ETL Script method also fails when you have to move data in near real-time from PostgreSQL to Redshift.
A more optimal way is to move only the incremental data between two syncs from Postgres to Redshift instead of a full load. This method is called the Change Data Capture method (a minimal sketch of the idea follows this list).
When you write custom SQL scripts to extract a subset of data, those scripts often break as the source schema keeps changing or evolving.
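To illustrate the incremental idea mentioned above, the sketch below pulls only the rows modified since the last sync. It assumes the source table has an updated_at column and that the last sync timestamp is tracked somewhere; here it is just a hard-coded placeholder:
import csv
import psycopg2

last_sync = "2024-01-01 00:00:00"  # in practice, load this from a checkpoint table or file

conn = psycopg2.connect(
    host="localhost", port=5432, dbname="productsdb",
    user="postgres", password="your_password",
)

with conn.cursor() as cur:
    cur.execute(
        "SELECT * FROM products WHERE updated_at > %s ORDER BY updated_at",
        (last_sync,),
    )
    rows = cur.fetchall()
    columns = [desc[0] for desc in cur.description]

conn.close()

with open("products_incremental.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)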
Additional Resources for PostgreSQL Integrations and Migrations
How to Load Data from PostgreSQL to BigQuery
PostgreSQL on Google Cloud SQL to BigQuery
Migrate Data from Postgres to MySQL
How to migrate Data from PostgreSQL to SQL Server
Export a PostgreSQL Table to a CSV File
Conclusion
This article detailed two methods for migrating data from PostgreSQL to Redshift, providing comprehensive steps for each approach.
The manual ETL process described in the first method comes with various challenges and limitations. However, for those needing real-time data replication and a fully automated solution, LIKE.TG stands out as the optimal choice.
FAQ on PostgreSQL to Redshift
How can the data be transferred from Postgres to Redshift?
Following are the ways by which you can connect Postgres to Redshift:
1. Manually, with the help of the command line and an S3 bucket.
2. Using automated Data Integration Platforms like LIKE.TG .
Is Redshift compatible with PostgreSQL?
Well, the good news is that Redshift is compatible with PostgreSQL. The slightly bad news, however, is that these two have several significant differences. These differences will impact how you design and develop your data warehouse and applications. For example, some features in PostgreSQL 9.0 have no support from Amazon Redshift.
Is Redshift faster than PostgreSQL?
Yes, Redshift works faster for OLAP operations and retrieves data faster than PostgreSQL.
How to connect to Redshift with psql?
You can connect to Redshift with psql in the following steps:
1. First, install psql on your machine.
2. Next, use this command to connect to Redshift: psql -h your-redshift-cluster-endpoint -p 5439 -U your-username -d your-database
3. It will prompt for the password. Enter your password, and you will be connected to Redshift.
Want to take LIKE.TG for a spin? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand. Check out our transparent pricing to make an informed decision!
Share your understanding of PostgreSQL to Redshift migration in the comments section below!

Connecting Elasticsearch to S3: 4 Easy Steps
Are you trying to derive deeper insights from your Elasticsearch data by moving it into scalable object storage like Amazon S3? Well, you have landed on the right article. This article will give you a brief overview of Elasticsearch and Amazon S3. You will also get to know how you can set up your Elasticsearch to S3 integration using 4 easy steps. Moreover, the limitations of the method will also be discussed. Read along to know more about connecting Elasticsearch to S3.
Note: Currently, LIKE.TG Data doesn’t support S3 as a destination.
What is Elasticsearch?
Elasticsearch accomplishes its super-fast search capabilities through the use of a Lucene-based distributed reverse index. When a document is loaded to Elasticsearch, it creates a reverse index of all the fields in that document.
A reverse index is an index where each of the entries is mapped to a list of documents that contains them. Data is stored in JSON form and can be queried using the proprietary query language.
Elasticsearch has four main APIs – Index API, Get API, Search API, and Put Mapping API (a short example using the Python client follows this list):
Index API is used to add documents to the index.
Get API allows to retrieve the documents and Search API enables querying over the index data.
Put Mapping API is used to add additional fields to an already existing index.
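To make these APIs concrete, here is a small sketch using the official Python client (7.x-style keyword arguments); the index name, document, and query are placeholders:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index API: add a document to the index
es.index(index="orders", id="1", body={"customer": "alice", "total": 42.5})

# Get API: retrieve the document by id
doc = es.get(index="orders", id="1")

# Search API: query over the index data
hits = es.search(index="orders", body={"query": {"match": {"customer": "alice"}}})

# Put Mapping API: add a new field to the existing index
es.indices.put_mapping(index="orders", body={"properties": {"status": {"type": "keyword"}}})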
The common practice is to use Elasticsearch as part of the standard ELK stack, which involves three components – Elasticsearch, Logstash, and Kibana:
Logstash provides data loading and transformation capabilities.
Kibana provides visualization capabilities.
Together, three of these components form a powerful Data Stack.
Behind the scenes, Elasticsearch uses a cluster of servers to deliver high query performance.
An index in Elasticsearch is a collection of documents.
Each index is divided into shards that are distributed across different servers. By default, it creates 5 shards per index with each shard having a replica for boosting search performance.
Index requests are handled only by the primary shards and search requests are handled by both the shards.
The number of shards is a parameter that is constant at the index level.
Users with deep knowledge of their data can override the default shard number and allocate more shards per index. A point to note is that a low amount of data distributed across a large number of shards will degrade the performance.
Amazon offers a completely managed Elasticsearch service that is priced according to the number of instance hours of operational nodes.
To know more about Elasticsearch, visit this link.
Simplify Data Integration With LIKE.TG ’s No-Code Data Pipeline
LIKE.TG Data, an Automated No-code Data Pipeline, helps you directly transfer data from 150+ sources (including 40+ free sources) like Elasticsearch to Data Warehouses, or a destination of your choice, in a completely hassle-free automated manner. LIKE.TG ’s end-to-end Data Management connects you to Elasticsearch’s cluster using the Elasticsearch Transport Client and synchronizes your cluster data using indices. LIKE.TG ’s Pipeline allows you to leverage the services of both Generic Elasticsearch and AWS Elasticsearch.
All of this combined with transparent LIKE.TG pricing and 24×7 support makes LIKE.TG the most loved data pipeline software in terms of user reviews.
LIKE.TG ’s consistent reliable solution to manage data in real-time allows you to focus more on Data Analysis, instead of Data Consolidation. Take our 14-day free trial to experience a better way to manage data pipelines.
Get started for Free with LIKE.TG !
What is Amazon S3?
AWS S3 is a fully managed object storage service that is used for a variety of use cases like hosting data, backup and archiving, data warehousing, etc.
Amazon handles all operational activities related to capacity scaling, pre-provisioning, etc., and customers only need to pay for the amount of space that they use. Here are a couple of key Amazon S3 features:
Access Control: It offers comprehensive access controls to meet any kind of organizational and business compliance requirements through an easy-to-use control panel interface.
Support for Analytics: S3 supports analytics through the use of AWS Athena and AWS redshift spectrum through which users can execute SQL queries over data stored in S3.
Encryption: S3 buckets can be encrypted by S3 default encryption. Once enabled, all items in a particular bucket will be encrypted.
High Availability: S3 achieves high availability by storing the data across several distributed servers. Naturally, there is an associated propagation delay with this approach and S3 only guarantees eventual consistency.
But the writes are atomic, which means that at any time the API will return either the new data or the old data; it will never provide a corrupted response.
Conceptually S3 is organized as buckets and objects.
A bucket is the highest-level S3 namespace and acts as a container for storing objects. They have a critical role in access control and usage reporting is always aggregated at the bucket level.
An object is the fundamental storage entity and consists of the actual object as well as the metadata. An object is uniquely identified by a unique key and a version identifier.
Customers can choose the AWS regions in which their buckets need to be located according to their cost and latency requirements.
A point to note here is that objects do not support locking and if two PUTs come at the same time, the request with the latest timestamp will win. This means if there is concurrent access, users will have to implement some kind of locking mechanism on their own.
To know more about Amazon S3, visit this link.
Steps to Connect Elasticsearch to S3 Using Custom Code
Moving data from Elasticsearch to S3 can be done in multiple ways.
The most straightforward is to write a script to query all the data from an index and write it into a CSV or JSON file. But the limitations to the amount of data that can be queried at once make that approach a nonstarter.
You will end up with errors ranging from timeouts to exceeding the maximum size of the query window. So, you need to consider other approaches to connect Elasticsearch to S3.
Logstash, a core part of the ELK stack, is a full-fledged data load and transformation utility.
With some adjustment of configuration parameters, it can be made to export all the data in an Elasticsearch index to CSV or JSON. The latest release of Logstash also includes an S3 plugin, which means the data can be exported to S3 directly without intermediate storage.
Thus, Logstash can be used to connect Elasticsearch to S3. Let us look in detail into this approach and its limitations.
Using Logstash
Logstash is a service-side pipeline that can ingest data from several sources, process or transform them and deliver them to several destinations.
In this use case, the Logstash input will be Elasticsearch, and the output will be a CSV file.
Thus, you can use Logstash to back up data from Elasticsearch to S3 easily.
Logstash is based on data access and delivery plugins and is an ideal tool for connecting Elasticsearch to S3. For this exercise, you need to install the Logstash Elasticsearch plugin and the Logstash S3 plugin. Below is a step-by-step procedure to connect Elasticsearch to S3:
Step 1: Execute the below command to install the Logstash Elasticsearch plugin.
logstash-plugin install logstash-input-elasticsearch
Step 2: Execute the below command to install the logstash output s3 plugin.
logstash-plugin install logstash-output-s3
Step 3: Next step involves the creation of a configuration for the Logstash execution. An example configuration to execute this is provided below.
input {
  elasticsearch {
    hosts => "elastic_search_host"
    index => "source_index_name"
    query => '{ "query": { "match_all": {} } }'
  }
}
output {
  s3 {
    access_key_id => "aws_access_key"
    secret_access_key => "aws_secret_key"
    bucket => "bucket_name"
  }
}
In the above JSON, replace the elastic_search_host with the URL of your source Elasticsearch instance. The index key should have the index name as the value.
The query tries to match every document present in the index. Remember to also replace the AWS access details and the bucket name with your required details.
Create this configuration and name it “es_to_s3.conf”.
Step 4: Execute the configuration using the following command.
logstash -f es_to_s3.conf
The above command will generate JSON output matching the query in the provided S3 location. Depending on your data volume, this will take a few minutes.
There are multiple parameters that can be adjusted in the S3 output configuration to control variables like output file size. A detailed description of all config parameters can be found in the Elastic Logstash Reference [8.1].
By following the above-mentioned steps, you can easily connect Elasticsearch to S3.
Here’s What Makes Your Elasticsearch or S3 ETL Experience With LIKE.TG Best In Class
These are some other benefits of having LIKE.TG Data as your Data Automation Partner:
Fully Managed: LIKE.TG Data requires no management and maintenance as LIKE.TG is a fully automated platform.
Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Schema Management: LIKE.TG can automatically detect the schema of the incoming data and map it to the destination schema.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Live Monitoring: Advanced monitoring gives you a one-stop view to watch all the activities that occur within pipelines.
Live Support: LIKE.TG team is available round the clock to extend exceptional support to its customers through chat, email, and support calls.
LIKE.TG can help you reduce your Data Cleaning and Preparation Time and seamlessly replicate your data from 150+ Data sources like Elasticsearch with a no-code, easy-to-setup interface.
Sign up here for a 14-Day Free Trial!
Limitations of Connecting Elasticsearch to S3 Using Custom Code
The above approach is the simplest way to transfer data from an Elasticsearch to S3 without using any external tools. But it does have some limitations. Below are two limitations that are associated while setting up Elasticsearch to S3 integrations:
This approach to connecting Elasticsearch to S3 works fine for a one-time load, but in most situations, the transfer is a continuous process that needs to be executed based on an interval or triggers. To accommodate such requirements, customized code will be required.
This approach to connecting Elasticsearch to S3 is resource-intensive and can hog the cluster depending on the number of indexes and the volume of data that needs to be copied.
Conclusion
This article provided you with a comprehensive guide to Elasticsearch and Amazon S3. You got to know about the methodology to backup Elasticsearch to S3 using Logstash and its limitations as well. Now, you are in the position to connect Elasticsearch to S3 on your own.
The manual approach of connecting Elasticsearch to S3 using Logstash will add complex overheads in terms of time and resources. Such a solution will require skilled engineers and regular data updates. Furthermore, you will have to build an in-house solution from scratch if you wish to transfer your data from Elasticsearch or S3 to a Data Warehouse for analysis.
LIKE.TG Data provides an Automated No-code Data Pipeline that empowers you to overcome the above-mentioned limitations. LIKE.TG caters to 150+ data sources (including 40+ free sources) and can seamlessly transfer your Elasticsearch data to a data warehouse or a destination of your choice in real-time. LIKE.TG ’s Data Pipeline enriches your data and manages the transfer process in a fully automated and secure manner without having to write any code. It will make your life easier and make data migration hassle-free.
Visit our Website to Explore LIKE.TG
Want to take LIKE.TG for a spin? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand.
What are your thoughts on moving data from Elasticsearch to S3? Let us know in the comments.

How to load data from MySQL to Snowflake using 2 Easy Methods
Relational databases, such as MySQL, have traditionally helped enterprises manage and analyze massive volumes of data effectively. However, as scalability, real-time analytics, and seamless data integration become increasingly important, contemporary data platforms like Snowflake have become strong alternatives. After experimenting with a few different approaches and learning from my failures, I’m excited to share my tried-and-true techniques for moving data from MySQL to Snowflake. In this blog, I’ll walk you through two simple migration techniques: manual and automated. I will also share the factors to consider while choosing the right approach. Select the approach that best meets your needs, and let’s get going!
What is MySQL?
MySQL is an open-source relational database management system (RDBMS) that allows users to access and manipulate databases using Structured Query Language (SQL). Created in the middle of the 1990s, MySQL’s stability, dependability, and user-friendliness have made it one of the most widely used databases worldwide. Its structured storage feature makes it ideal for organizations that require high-level data integrity, consistency, and reliability. Some significant organizations that use MySQL include Amazon, Uber, Airbnb, and Shopify.
Key Features of MySQL :
Free to Use: MySQL is open-source, so you can download, install, and use it without any licensing costs. This gives you all the functionality of a robust database management system without many barriers. However, for large organizations, it also offers commercial versions like MySQL Cluster Carrier Grade Edition and MySQL Enterprise Edition.
Scalability: Suitable for both small and large-scale applications.
What is Snowflake?
Snowflake is a cloud-based data warehousing platform designed for high performance and scalability. Unlike traditional databases, Snowflake is built on a cloud-native architecture, providing robust data storage, processing, and analytics capabilities.
Key Features of Snowflake :
Cloud-Native Architecture: Fully managed service that runs on cloud platforms like AWS, Azure, and Google Cloud.
Scalability and Elasticity: Automatically scales compute resources to handle varying workloads without manual intervention.
Why move MySQL data to Snowflake?
Performance and Scalability: MySQL may experience issues managing massive amounts of data and numerous user queries simultaneously as data quantity increases. Snowflake’s cloud-native architecture, which offers nearly limitless scalability and great performance, allows you to handle large datasets and intricate queries effectively.
Higher Level Analytics: Snowflake offers advanced analytical features like data science and machine learning workflow assistance. These features can give you deeper insights and promote data-driven decision-making.
Economy of Cost: Because Snowflake separates computation and storage resources, you can optimize your expenses by only paying for what you utilize. The pay-as-you-go approach is more economical than the upkeep and expansion of MySQL servers situated on-site.
Data Integration and Sharing: Snowflake’s powerful data-sharing features make integrating and securely exchanging data easier across departments and external partners. This skill is valuable for firms seeking to establish a cohesive data environment.
Streamlined Upkeep: Snowflake removes the need for database administration duties, which include software patching, hardware provisioning, and backups. It is a fully managed service that enables you to concentrate less on maintenance and more on data analysis.
Methods to transfer data from MySQL to Snowflake:
Method 1: How to Connect MySQL to Snowflake using Custom Code
Prerequisites
You should have a Snowflake Account. If you don’t have one, check out Snowflake and register for a trial account.
A MySQL server with your database. You can download it from MySQL’s official website if you don’t have one.
Let’s examine the step-by-step method for connecting MySQL to Snowflake using the MySQL Application Interface and Snowflake Web Interface.
Step 1: Extract Data from MySQL
I created a dummy table called cricketers in MySQL for this demo.
You can click on the rightmost table icon to view your table.
Next, we need to save a .csv file of this table in our local storage to later load it into Snowflake.
You can do this by clicking on the icon next to Export/Import.
This will automatically save a .csv file of the table that is selected on your local storage.
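If you prefer to script this export instead of clicking through MySQL Workbench, a minimal sketch using the mysql-connector-python package and Python’s csv module could look like the following. The connection details, the demo database name, and the output path are assumptions for illustration, not values required by this guide.

# Minimal sketch: export a MySQL table to a CSV file for later loading into Snowflake.
# Host, credentials, database name, and output path below are placeholder assumptions.
import csv
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="root", password="your_password", database="demo"
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM cricketers")

with open("cricketers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([column[0] for column in cursor.description])  # header row
    writer.writerows(cursor.fetchall())

cursor.close()
conn.close()

The resulting cricketers.csv can then be loaded through the Snowflake web interface exactly as described in the next steps.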
Step 2: Create a new Database in Snowflake
Now, we need to import this table into Snowflake.
Log into your Snowflake account, click Data>Databases, and click the +Database icon on the right-side panel to create a new database.
For this guide, I have already made a database called DEMO.
Step 3: Create a new Table in that database
Now click DEMO>PUBLIC>Tables, click the Create button, and select the From File option from the drop-down menu.
A drop box will appear where you can drag and drop your .csv file.
Select and create a new table and give it a name.
You can also choose from existing tables, and your data will be appended to that table.
Step 4: Edit your table schema
Click next. In this dialogue box, you can edit the schema.
After modifying the schema according to your needs, click the load button.
This will start loading your table data from the .csv file to Snowflake.
Step 5: Preview your loaded table
Once the loading process has been completed, you can view your data by clicking the preview button.
Note: An alternative method of moving data is to create an Internal/External stage in Snowflake and load data into it.
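If you want to follow the stage-based route mentioned in the note above, a rough sketch using the snowflake-connector-python package might look like this. The account details, warehouse, database, table, and file path are placeholder assumptions; adapt them to your environment.

# Rough sketch: load a local CSV into a Snowflake table via an internal stage.
# Account, credentials, warehouse, database, table, and file path are assumptions.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="COMPUTE_WH", database="DEMO", schema="PUBLIC"
)
cur = conn.cursor()
cur.execute("CREATE STAGE IF NOT EXISTS my_csv_stage")
cur.execute("PUT file:///path/to/cricketers.csv @my_csv_stage AUTO_COMPRESS=TRUE")
cur.execute(
    "COPY INTO CRICKETERS "
    "FROM @my_csv_stage/cricketers.csv.gz "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)
cur.close()
conn.close()

Compared to the drag-and-drop upload, the stage-based approach is easier to re-run and automate once the table grows.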
Limitations of Manually Migrating Data from MySQL to Snowflake:
Error-prone: Custom coding and SQL queries introduce a higher risk of errors, potentially leading to data loss or corruption.
Time-Consuming: Handling tables for large datasets is highly time-consuming.
Orchestration Challenges: A manual migration requires you to handle monitoring, alerting, and progress tracking yourself.
Method 2: How to Connect MySQL to Snowflake using an Automated ETL Platform
Prerequisites:
To set up your pipeline, you need a LIKE.TG account. If you don’t have one, you can visit LIKE.TG .
A Snowflake account.
A MySQL server with your database.
Step 1:Connect your MySQL account to LIKE.TG ’s Platform.
To begin with, I am logging in to my LIKE.TG platform. Next, create a new pipeline by clicking PIPELINES and then the + CREATE button.
LIKE.TG provides built-in MySQL integration that can connect to your account within minutes. Choose MySQL as the source and fill in the necessary details.
Enter your Source details and click on TEST CONTINUE.
Next, select all the objects that you want to replicate. Objects are nothing but tables.
Step 2: Connect your Snowflake account to LIKE.TG ’s Platform
You have successfully connected your source and destination with these two simple steps. From here, LIKE.TG will take over and move your valuable data from MySQL to Snowflake.
Advantages of using LIKE.TG :
Auto Schema Mapping: LIKE.TG eliminates the tedious task of schema management. It automatically detects the schema of incoming data and maps it to the destination schema.
Incremental Data Load: Allows the transfer of modified data in real-time, ensuring efficient bandwidth utilization on both ends.
Data Transformation: It provides a simple interface for perfecting, modifying, and enriching the data you want to transfer.
Note: Alternatively, you can use SaaS ETL platforms like Estuary or Airbyte to migrate your data.
Best Practices for Data Migration:
Examine Data and Workloads: Before migrating, constantly evaluate the schema, volume of your data, and kinds of queries currently running in your MySQL databases.
Select the Appropriate Migration Technique:
Manual ETL Procedure: This procedure is appropriate for smaller datasets or situations requiring precise process control. It requires manually loading data into Snowflake after exporting it from MySQL (for example, using CSV files).
Using Snowflake’s Staging: For larger datasets, consider utilizing either the internal or external stages of Snowflake. Using a staging area, you can import the data into Snowflake after exporting it from MySQL to a CSV or SQL dump file.
Validation of Data and Quality Assurance:
Assure data integrity before and after migration by verifying data types, restrictions, and completeness.
Verify the correctness and consistency of the data after migration by running checks.
Optimize Data for Snowflake:
Take advantage of Snowflake’s performance optimizations.
Utilize clustering keys to arrange information.
Make use of Snowflake’s built-in automatic query optimization tools.
Think about using query pattern-based partitioning methods.
Manage Schema Changes and Data Transformations:
Adjust the MySQL schema to meet Snowflake’s needs.
Snowflake supports semi-structured data, although the structure of the data may need to be changed.
Plan the necessary changes and carry them out during the migration process.
Verify that the syntax and functionality of SQL queries are compatible with Snowflake.
Troubleshooting Common Issues :
Problems with Connectivity:
Verify that Snowflake and MySQL have the appropriate permissions and network setup.
Diagnose connectivity issues as soon as possible by utilizing monitoring and logging technologies.
Performance bottlenecks:
Track query performance both before and after the move.
Optimize SQL queries for the query optimizer and architecture of Snowflake.
Mismatches in Data Type and Format:
Identify and resolve format and data type differences between Snowflake and MySQL.
When migrating data, make use of the proper data conversion techniques.
Conclusion:
You can now seamlessly connect MySQL to Snowflake using manual or automated methods. The manual method will work if you seek a more granular approach to your migration. However, if you are looking for an automated, no-code solution for your migration, book a demo with LIKE.TG .
FAQ on MySQL to Snowflake
How to transfer data from MySQL to Snowflake?
Step 1: Export Data from MySQL
Step 2: Upload Data to Snowflake
Step 3: Create Snowflake Table
Step 4: Load Data into Snowflake
How do I connect MySQL to Snowflake?
1. Snowflake Connector for MySQL
2. ETL/ELT Tools
3. Custom Scripts
Does Snowflake use MySQL?
No, Snowflake does not use MySQL.
How to get data from SQL to Snowflake?
Step 1: Export Data
Step 2: Stage the Data
Step 3: Load Data
How to replicate data from SQL Server to Snowflake?
1. Using ETL/ELT Tools
2. Custom Scripts
3. Database Migration Services

How To Migrate a MySQL Database Between Two Servers
There are many use cases where you must migrate a MySQL database between two servers, like cloning a database for testing, maintaining a separate database for running reports, or completely migrating a database system to a new server. Broadly, you will take a data backup on the first server, transfer it remotely to the destination server, and finally restore the backup on the new MySQL instance. This article will walk you through the steps to migrate a MySQL database between two servers in 3 simple steps. Additionally, we will explore how to perform a MySQL migration by copying a MySQL database from one server to another. This process is crucial when you want to move your MySQL database to another server without losing any data or functionality.
We will cover the necessary steps and considerations involved in successfully completing a MySQL migration. So, whether you are looking to clone a database, create a separate database for reporting purposes, or completely migrate your database to a new server, this guide will provide you with the information you need.
Steps to Migrate MySQL Database Between 2 Servers
Let’s understand the steps to migrate the MySQL database between 2 servers. Understanding the process of transferring MySQL databases from one server to another is crucial for maintaining data integrity and continuity of services. To migrate MySQL database seamlessly, ensure both source and target servers are compatible.
Below are the steps you can follow to understand how to migrate MySQL database between 2 servers:
Step 1: Backup the Data
Step 2: Copy the Database Dump on the Destination Server
Step 3: Restore the Dump
Want to migrate your SQL data effortlessly?
Check out LIKE.TG ’s no-code data pipeline that allows you to migrate data from any source to a destination with just a few clicks. Start your 14 days trial now for free!
Get Started with LIKE.TG for Free
1) Backup the Data
The first step to migrate a MySQL database is to take a dump of the data that you want to transfer; this dump will later be restored on the destination server. To do that, you will have to use the mysqldump command. The basic syntax of the command is:
mysqldump -u [username] -p [database] > dump.sql
If the database is on a remote server, either log in to that system using ssh or use -h and -P options to provide host and port respectively.
mysqldump -P [port] -h [host] -u [username] -p [database] > dump.sql
There are various options available for this command; let’s go through the major ones by use case.
A) Backing Up Specific Databases
mysqldump -u [username] -p [database] > dump.sql
This command dumps the specified database to the file.
You can specify multiple databases for the dump using the following command:
mysqldump -u [username] -p --databases [database1] [database2] > dump.sql
You can use the --all-databases option to back up all databases on the MySQL instance.
mysqldump -u [username] -p --all-databases > dump.sql
B) Backing Up Specific Tables
The above commands dump all the tables in the specified database. If you need to back up only specific tables, you can use the following command:
mysqldump -u [username] -p [database] [table1] [table2] > dump.sql
C) Custom Query
If you want to back up data using a custom query, you will need to use the --where option provided by mysqldump.
mysqldump -u [username] -p [database] [table1] --where="WHERE CLAUSE" > dump.sql
Example: mysqldump -u root -p testdb table1 --where="mycolumn = myvalue" > dump.sql
Note:
By default, the mysqldump command includes DROP TABLE and CREATE TABLE statements in the created dump. Hence, if you are using incremental backups or you specifically want to restore data without deleting previous data, make sure you use the --no-create-info option while creating the dump.
mysqldump -u [username] -p [database] --no-create-info > dump.sql
If you just need to copy the schema but not the data, you can use the --no-data option while creating the dump.
mysqldump -u [username] -p [database] --no-data > dump.sql
Other use cases
Here’s a list of uses for the mysqldump command based on use cases:
To backup a single database:
mysqldump -u [username] -p [database] > dump.sql
To backup multiple databases:
mysqldump -u [username] -p --databases [database1] [database2] > dump.sql
To backup all databases on the instance:
mysqldump -u [username] -p --all-databases > dump.sql
To backup specific tables:
mysqldump -u [username] -p [database] [table1] [table2] > dump.sql
To backup data using some custom query:
mysqldump -u [username] -p [database] [table1] --where="WHERE CLAUSE" > dump.sql
Example:
mysqldump -u root -p testdb table1 --where="mycolumn = myvalue" > dump.sql
To copy only the schema but not the data:
mysqldump -u [username] -p [database] --no-data > dump.sql
To restore data without deleting previous data (incremental backups):
mysqldump -u [username] -p [database] --no-create-info > dump.sql
2) Copy the Database Dump on the Destination Server
Once you have created the dump as per your specification, the next step to migrate the MySQL database is to copy the dump file to the destination server. You will have to use the scp command for that.
scp -P [port] [dump_file].sql [username]@[servername]:[path on destination]
Examples:
scp dump.sql [username]@[servername]:/var/data/mysql
scp -P 3306 dump.sql [username]@[servername]:/var/data/mysql
To copy a dump of all databases, use this syntax:
scp all_databases.sql [username]@[servername]:~/
For a single database:
scp database_name.sql [username]@[servername]:~/
3) Restore the Dump
The last step in MySQL migration is restoring the data on the destination server. The mysql command directly provides a way to restore the dump data into MySQL.
mysql -u [username] -p [database] < [dump_file].sql
Example:
mysql -u root -p testdb < dump.sql
Don’t specify the database in the above command if your dump includes multiple databases.
mysql -u root -p < dump.sql
For all databases:
mysql -u [user] -p --all-databases < all_databases.sql
For a single database:
mysql -u [user] -p newdatabase < database_name.sql
For multiple databases:
mysql -u root -p < dump.sql
Limitations with Dumping and Importing MySQL Data
Dumping and importing MySQL data can present several challenges:
Time Consumption: The process can be time-consuming, particularly for large databases, due to creating, transferring, and importing dump files, which may slow down with network speed and database size.
Potential for Errors: Human error is a significant risk, including overlooking steps, misconfiguring settings, or using incorrect parameters with the mysqldump command.
Data Integrity Issues: Activities on the source database during the dump process can lead to data inconsistencies in the exported SQL dump. Measures like putting the database in read-only mode or locking tables can mitigate this but may impact application availability.
Memory Limitations: Importing massive SQL dump files may encounter memory constraints, necessitating adjustments to MySQL server configurations on the destination machine.
Conclusion
Following the above-mentioned steps, you can migrate a MySQL database between two servers easily, but migrating a MySQL database to another server can be quite a cumbersome activity, especially if it’s repetitive. An all-in-one solution like LIKE.TG takes care of this effortlessly and helps manage all your data pipelines in an elegant and fault-tolerant manner.
LIKE.TG will automatically catalog all your table schemas and do all the necessary transformations to copy MySQL database from one server to another. LIKE.TG will fetch the data from your source MySQL server incrementally and restore that seamlessly onto the destination MySQL instance. LIKE.TG will also alert you through email and Slack if there are schema changes or network failures. All of this can be achieved from the LIKE.TG UI, with no need to manage servers or cron jobs.
VISIT OUR WEBSITE TO EXPLORE LIKE.TG
Want to take LIKE.TG for a spin?
Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand. You can also have a look at the unbeatable LIKE.TG pricing that will help you choose the right plan for your business needs.
Share your experience of learning about the steps to migrate MySQL database between 2 servers in the comments section below.

How to load data from Facebook Ads to Google BigQuery
Leveraging the data from Facebook Ads Insights offers businesses a great way to measure their target audiences. However, transferring massive amounts of Facebook ad data to Google BigQuery is no easy feat. If you want to do just that, you’re in luck. In this article, we’ll be looking at how you can migrate data from Facebook Ads to BigQuery.
Understanding the Methods to Connect Facebook Ads to BigQuery
These are the methods you can use to move data from Facebook Ads to BigQuery:
Method 1: Using LIKE.TG to Move Data from Facebook Ads to BigQuery
Method 2: Writing Custom Scripts to Move Data from Facebook Ads to BigQuery
Method 3: Manual Upload of Data from Facebook Ads to BigQuery
Method 1: Using LIKE.TG to Move Data from Facebook Ads to BigQuery
LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources load data to the destinations but also transform enrich your data, make it analysis-ready.
Get Started with LIKE.TG for Free
LIKE.TG can help you load data in two simple steps:
Step 1: Connect Facebook Ads Account as Source
Follow the below steps to set up Facebook Ads Account as source:
In the Navigation Bar, Click PIPELINES.
Click + CREATE in the Pipelines List View.
From the Select Source Type page, select Facebook Ads.
In the Configure your Facebook Ads account page, you can do one of the following:
Select a previously configured account and click CONTINUE.
Click Add Facebook Ads Account and follow the below steps to configure an account:
Log in to your Facebook account, and in the pop-up dialog, click Continue as <Company Name>
Click Save to authorize LIKE.TG to access your Facebook Ads and related statistics.
Click Got it in the confirmation dialog.
Configure your Facebook Ads as a source by providing the Pipeline Name, authorized account, report type, aggregation level, aggregation time, breakdowns, historical sync duration, and key fields.
Step 2:Configure Google BigQuery as your Destination
Click DESTINATIONS in the Navigation Bar.
In the Destinations List View, Click + CREATE.
Select Google BigQuery as the Destination type in the Add Destination page.
Connect to your BigQuery account and start moving your data from Facebook Ads to BigQuery by providing the project ID, dataset ID, Data Warehouse name, GCS bucket.
Simplify your data analysis with LIKE.TG today and sign up here for a 14-day free trial!
Method 2: Writing Custom Scripts to Move Data from Facebook Ads to BigQuery
Migrating data from Facebook Ads Insights to Google BigQuery essentially involves two key steps:
Step 1: Pulling Data from Facebook
Step 2: Loading Data into BigQuery
Step 1: Pulling Data from Facebook
Put simply, pulling data from Facebook involves downloading the relevant Ads Insights data, which can be used for a variety of business purposes. Currently, there are two main methods for users to pull data from Facebook:
Through APIs.
Through Real-time streams.
Method 1: Through APIs
Users can access Facebook’s APIs through the different SDKs offered by the platform. While Python and PHP are the main languages supported by Facebook, it’s easy to find community-supported SDKs for languages such as JavaScript, R, and Ruby.
What’s more, the Facebook Marketing API is relatively easy to use – which is why it can be harnessed to execute requests that direct to specific endpoints. Also, since the Facebook Marketing API is a RESTful API, you can interact with it via your favorite framework or language.
Like everything else Facebook-related, Ads and statistics data are part of the Graph API and can be acquired through it, and any requests for statistics specific to particular ads can be sent to Facebook Insights. In turn, Insights will reply to such requests with more information on the queried ad object.
If the above seems overwhelming, there’s no need to worry; we’ll take a look at an example to help simplify things. Suppose you want to extract all stats relevant to your account. This can be done by executing the following simple request through curl:
curl -F 'level=campaign' -F 'fields=[]' -F 'access_token=<ACCESS_TOKEN>' https://graph.facebook.com/v2.5/<CAMPAIGN_ID>/insights
curl -G -d 'access_token=<ACCESS_TOKEN>' https://graph.facebook.com/v2.5/1000002
curl -G -d 'access_token=<ACCESS_TOKEN>' https://graph.facebook.com/v2.5/1000002/insights
Once it’s ready, the data you’ve requested will be returned in either CSV or XLS format, and you’ll be able to access it via a URL such as the one below:
https://www.facebook.com/ads/ads_insights/export_report?report_run_id=<REPORT_ID>&format=<REPORT_FORMAT>&access_token=<ACCESS_TOKEN>
Method 2: Through Real-time Streams
You can also pull data from Facebook by creating a real-time data infrastructure and can even load your data into the data warehouse. All you need to do to achieve this and to receive API updates is to subscribe to real-time updates.
Using the right infrastructure, you’ll be able to stream an almost real-time data feed to your database, and by doing so, you’ll be kept up-to-date with the latest data.
Facebook Ads boasts a tremendously rich API that offers users the opportunity to extract even the smallest portions of data regarding accounts and target audience activities. More importantly, however, is that all of this real-time data can be used for analytics and reporting purposes.
However, there’s a minor consideration that needs to be mentioned. It’s no secret that these resources become more complex as they continue to grow, meaning you’ll need a complex protocol to handle them and it’s worth keeping this in mind as the volume of your data grows with each passing day.
Moving on, the data that you pull from Facebook can be in one of a plethora of different formats, yet BigQuery isn’t compatible with all of them. This means that it’s in your best interest to convert data into a format supported by BigQuery after you’ve pulled it from Facebook.
For example, if you pull XML data, then you’ll need to convert it into any of the following data formats:
CSV
JSON.
You should also make sure that the data types you’re using are supported by BigQuery. BigQuery currently supports the following data types:
STRING
INTEGER
FLOAT
BOOLEAN
RECORD
TIMESTAMP
Please refer to Google’s documentation on preparing data for BigQuery, to learn more.
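As a small, hedged illustration of that preparation step, the snippet below reshapes a handful of already-parsed Insights records into newline-delimited JSON, which BigQuery can load. The field names and values are made up for illustration and will depend on the report you request.

# Sketch: write parsed Facebook Ads Insights records as newline-delimited JSON.
# The record structure below is an assumption for illustration only.
import json

insights_records = [
    {"campaign_id": "1000002", "impressions": "4821", "clicks": "97", "spend": "35.4"},
    {"campaign_id": "1000003", "impressions": "1210", "clicks": "18", "spend": "9.1"},
]

with open("fb_ads_insights.json", "w") as f:
    for record in insights_records:
        f.write(json.dumps(record) + "\n")  # one JSON object per line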
Now that you’ve understood the different data formats and types supported by BigQuery, it’s time to learn how to load that data into BigQuery.
Step 2: Loading Data Into BigQuery
If you opt to use Google Cloud Storage to load data from Facebook Ads into BigQuery, then you’ll need to first load the data into Google Cloud Storage. This can be done in one of a few ways.
First and foremost, this can be done directly through the console. Alternatively, you can post data with the help of the JSON API. One thing to note here is that APIs play a crucial role, both in pulling data from Facebook Ads and loading data into BigQuery.
Perhaps the simplest way to load data into BigQuery is by issuing an HTTP POST request using tools such as curl. Should you decide to go this route, your POST request should look something like this:
POST /upload/storage/v1/b/myBucket/o?uploadType=media&name=TEST HTTP/1.1
Host: www.googleapis.com
Content-Type: application/text
Content-Length: number_of_bytes_in_file
Authorization: Bearer your_auth_token

your Facebook Ads data
And if you enter everything correctly you’ll get a response that looks like this:
HTTP/1.1 200
Content-Type: application/json

{ "name": "TEST" }
However, remember that tools like curl are only useful for testing purposes. So, you’ll need to write specific code to send data to Google if you want to automate the data loading process.
This can be done in one of the following languages when using Google App Engine:
Python
Java
PHP
Go
Apart from coding for Google App Engine, the above languages can also be used to access Google Cloud Storage directly through their client libraries.
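For instance, a minimal Python sketch using the google-cloud-storage client library could upload the extracted file; the bucket name and object paths below are assumptions for illustration.

# Minimal sketch: upload an extracted Facebook Ads file to Google Cloud Storage.
# The bucket name and file paths are placeholder assumptions.
from google.cloud import storage  # pip install google-cloud-storage

client = storage.Client()  # uses your default Google Cloud credentials
bucket = client.bucket("my-facebook-ads-bucket")
blob = bucket.blob("exports/fb_ads_insights.json")
blob.upload_from_filename("fb_ads_insights.json")
print("Uploaded to gs://my-facebook-ads-bucket/exports/fb_ads_insights.json")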
Once you’ve imported your extracted data into Google Cloud Storage, you’ll need to create and run a LoadJob, which points to the data that needs to be imported from Cloud Storage and ultimately loads it into BigQuery. This works by specifying source URIs that point to the uploaded objects.
This method makes use of POST requests for storing data in the Google Cloud Storage API, from where it will load the data into BigQuery.
Another method to accomplish this is by posting a direct HTTP POST request to BigQuery with the data you’d like to query. While this method is very similar to loading data through the JSON API, it differs by using specific BigQuery end-points to load data directly. Furthermore, the interaction is quite simple and can be carried out via either the framework or the HTTP client library of your preferred language.
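To make the Cloud Storage load-job route described above more concrete, here is a compact sketch using the google-cloud-bigquery client library; the dataset, table, and gs:// URI are assumptions for illustration.

# Sketch: load newline-delimited JSON from Cloud Storage into a BigQuery table.
# The dataset, table, and gs:// URI below are placeholder assumptions.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
)
load_job = client.load_table_from_uri(
    "gs://my-facebook-ads-bucket/exports/fb_ads_insights.json",
    "my_dataset.fb_ads_insights",
    job_config=job_config,
)
load_job.result()  # waits for the load to complete
print("Loaded {} rows.".format(load_job.output_rows))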
Limitations of using Custom Scripts to Connect Facebook Ads to BigQuery
Building custom code to transfer data from Facebook Ads to Google BigQuery may appear to be a practically sound arrangement. However, this approach comes with some limitations too.
Code Maintenance: Since you are building the code yourself, you would need to monitor and maintain it too. On the off chance that Facebook refreshes its API or the API sends a field with a datatype which your code doesn’t understand, you would need to have resources that can handle these ad-hoc requests.
Data Consistency: You additionally will need to set up a data validation system in place to ensure that there is no data leakage in the infrastructure.
Real-time Data: The above approach can help you move data one time from Facebook Ads to BigQuery. If you are looking to analyze data in real-time, you will need to deploy additional code on top of this.
Data Transformation Capabilities: Often, there will arise a need for you to transform the data received from Facebook before analyzing it. Eg: When running ads across different geographies globally, you will want to convert the timezones and currencies from your raw data and bring them to a standard format. This would require extra effort.
Utilizing a Data Integration platform like LIKE.TG frees you of the above constraints.
Method 3: Manual Upload of Data from Facebook Ads to BigQuery
This is an affordable solution for moving data from Facebook Ads to BigQuery. These are the steps that you can carry out to load data from Facebook Ads to BigQuery manually:
Step 1: Create a Google Cloud project, after which you will be taken to a “Basic Checklist”. Next, navigate to Google BigQuery and look for your new project.
Step 2: Log In to Facebook Ads Manager and navigate to the data you wish to query in Google BigQuery. If you need daily data, you need to segment your reports by day.
Step 3: Download the data by selecting “Reports” and then click on “Export Table Data”. Export your data as a .csv file and save it on your PC.
Step 4: Navigate back to Google BigQuery and ensure that your project is selected at the top of the screen. Click on your project ID in the left-hand navigation and click on “+ Create Dataset”
Step 5: Provide a name for your dataset and ensure that an encryption method is set. Click on “Create Dataset” followed by clicking on the name of your new dataset in the left-hand navigation. Next, click on “Create Table” to finish this step.
Step 6: Go to the source section, then create your table from the Upload option. Find your Facebook Ads report that you saved to your PC and choose file format as CSV. In the destination section, select “Search for a project”. Next, find your project name from the dropdown list. Select your dataset name and the name of the table.
Step 7: Go to the schema section and click on the checkbox to allow BigQuery to either auto-detect a schema or click on “Edit as Text” to manually name schema, set mode, and type.
Step 8: Go to the Partition and Cluster Settings section and choose “Partition by Ingestion Time” or “No partitioning” based on your needs. Partitioning splits your table into smaller segments that allow smaller sections of data to be queried quickly. Next, navigate to Advanced options and set the field delimiter, such as a comma.
Step 9: Click “Create table”. Your Data Warehouse will begin to populate with Facebook Ads data. You can check your Job History for the status of your data load. Navigate to Google BigQuery and click on your dataset ID.
Step 10: You can write SQL queries against your Facebook data in Google BigQuery, or export your data to Google Data Studio along with other third-party tools for further analysis. You can repeat this process for all additional Facebook data sets you wish to upload and ensure fresh data availability.
Limitations of Manual Upload of Data from Facebook Ads to BigQuery
Data Extraction: Manually downloading large volumes of data from Facebook Ads is a daunting and time-consuming task.
Data Uploads: A manual upload process needs to be watched and attended to continuously.
Human Error: In a manual process, errors such as mistakes in data entry, omitted uploads, and duplication of records can take place.
Data Integrity: There is no automated assurance mechanism to ensure the integrity and consistency of the data.
Delays: Manual uploads run the risk of delaying the availability and integration of data for analysis.
Benefits of sending data from Facebook Ads to Google BigQuery
Identify patterns with SQL queries: To gain deeper insights into your ad performance, you can use advanced SQL queries. This helps you to analyze data from multiple angles, spot patterns, and understand metric correlations.
Conduct multi-channel ad analysis: You can integrate your Facebook Ads data with metrics from other sources like Google Ads, Google Analytics 4, CRM, or email marketing apps. By doing this, you can analyze your overall marketing performance and understand how different channels work together.
Analyze ad performance in-depth: You can carry out a time series analysis to identify changes in ad performance over time and understand how factors like seasonality impact ad performance.
Leverage ML algorithms: You can also build ML models and train them to forecast future performance, identify which factors drive ad success, and optimize your campaigns accordingly.
Data Visualization: Build powerful interactive dashboards by connecting BigQuery to PowerBI, Looker Studio (former Google Data Studio), or another data visualization tool. This enables you to create custom dashboards that showcase your key metrics, highlight trends, and provide actionable insights to drive better marketing decisions.
Use Cases of Loading Facebook Ads to BigQuery
Marketing Campaigns: Analyzing Facebook Ads audience data in BigQuery can help you enhance the performance of your marketing campaigns. Advertisement data from Facebook combined with business data in BigQuery can give better insights for decision-making.
Personalized Audience Targeting: With Facebook Ads conversion data in BigQuery, you can utilize BigQuery’s powerful querying capabilities to segment audiences based on detailed demographics, interests, and behaviors extracted from Facebook Ads data.
Competitive Analysis: You can compare your Facebook attribution data in BigQuery with publicly available data sources to understand the ad performance of industry competitors.
Get Real-time Streams of Your Facebook Ad Statistics
You can easily create a real-time data infrastructure for extracting data from Facebook Ads and loading them into a Data Warehouse repository. You can achieve this by subscribing to real-time updates to receive API updates with Webhooks. Armed with the proper infrastructure, you can have an almost real-time data feed into your repository and ensure that it will always be up to date with the latest bit of data. Facebook Ads is a real-time bidding system where advertisers can compete to showcase their advertising material.
Facebook Ads provides a very rich API that gives you the opportunity to get extremely granular data regarding your account activities and leverage it for reporting and analytics purposes. This richness comes at a cost, though: its many complex resources must be tackled with an equally intricate protocol.
Prepare Your Facebook Ads Data for Google BigQuery
Before diving into the methods that can be deployed to set up a connection from Facebook Ads to BigQuery, you should ensure that it is furnished in an appropriate format. For instance, if the API you pull data from returns an XML file, you would first have to transform it to a serialization that can be understood by BigQuery. As of now, the following two data formats are supported:
JSON
CSV
Apart from this, you also need to ensure that the data types you leverage are the ones supported by Google BigQuery, which are as follows:
STRING
INTEGER
FLOAT
BOOLEAN
RECORD
TIMESTAMP
Additional Resources on Facebook Ads To BigQuery
Explore how to Load Data into BigQuery
Conclusion
This blog talks about the 3 different methods you can use to move data from Facebook Ads to BigQuery in a seamless fashion.
It also provides information on the limitations of using the manual methods and use cases of integrating Facebook ads data to BigQuery.
FAQ about Facebook Ads to Google BigQuery
How do I get Facebook data into BigQuery?
To get Facebook data into BigQuery, you can use one of the following methods:
1. Use ETL Tools
2. Google Cloud Data Transfer Service
3. Run Custom Scripts
4. Manual CSV Upload
How do I integrate Google Ads to BigQuery?
Google Ads has a built-in connector in BigQuery. To use it, go to your BigQuery console, find the data transfer service, and set up a new transfer from Google Ads.
How to extract data from Facebook ads?
To extract data from Facebook ads, you can use the Facebook Ads API or third-party ETL tools like LIKE.TG Data.
Do you have any experience in working with moving data from Facebook Ads to BigQuery? Let us know in the comments section below.

API to BigQuery: 2 Preferred Methods to Load Data in Real time
Many businesses today use a variety of cloud-based applications for day-to-day business, like Salesforce, HubSpot, Mailchimp, Zendesk, etc. Companies are also very keen to combine this data with other sources to measure key metrics that help them grow. Given that most cloud applications are owned and run by third-party vendors, the applications expose their APIs to help companies extract the data into a data warehouse – say, Google BigQuery. This blog details the process you would need to follow to move data from API to BigQuery.
Besides learning about the data migration process from a REST API to BigQuery, we’ll also learn about the shortcomings of each approach and the workarounds. Let’s get started.
Note: When you connect API to BigQuery, consider factors like data format, update frequency, and API rate limits to design a stable integration.
Method 1: Loading Data from API to BigQuery using LIKE.TG Data
LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources load data to the destinations but also transform enrich your data, make it analysis-ready.
Here are the steps to move data from API to BigQuery using LIKE.TG :
Step 1: Configure REST API as your source
Click PIPELINES in the Navigation Bar.
Click + CREATE in the Pipelines List View.
In the Select Source Type page, select REST API.
In the Configure your REST API Source page:
Specify a unique Pipeline Name, not exceeding 255 characters.
Set up your REST API Source.
Specify the data root, or the path, from where you want LIKE.TG to replicate the data.
Select the pagination method to read through the API response. Default selection: No Pagination.
Step 2: Configure BigQuery as your Destination
Click DESTINATIONS in the Navigation Bar.
Click + CREATE in the Destinations List View.
In the Add Destination page, select Google BigQuery as the Destination type.
In the Configure your Google BigQuery Warehouse page, specify the following details:
Yes, that is all. LIKE.TG will do all the heavy lifting to ensure that your analysis-ready data is moved to BigQuery, in a secure, efficient, and reliable manner.
To know in detail about configuring REST API as your source, refer to LIKE.TG Documentation.
Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand.
Method 2: API to BigQuery ETL Using Custom Code
The BigQuery Data Transfer Service provides a way to schedule and manage transfers from a REST API data source to BigQuery for supported applications.
One advantage of using the REST API to load data into Google BigQuery is the ability to perform actions (like inserting data or creating tables) that might not be directly supported by the web-based BigQuery interface. The steps involved in migrating data from API to BigQuery are as follows:
Getting your data out of your application using API
Preparing the data that was extracted from the Application
Loading data into Google BigQuery
Step 1: Getting data out of your application using API
Below are the steps to extract data from the application using API.
Get the API URL from which you need to extract the data. In this article, you will learn how to use Python to extract data from ExchangeRatesAPI.io, which is a free service for current and historical foreign exchange rates published by the European Central Bank. The same method should broadly work for any API that you would want to use.
API URL = https://api.exchangeratesapi.io/latest?symbols=USD,GBP. If you click on the URL, you will get the below result:
{ "rates":{ "USD":1.1215, "GBP":0.9034 }, "base":"EUR", "date":"2019-07-17" }
Reading and Parsing API response in Python:
a. To handle the API response, you will need two important libraries:
import requests
import json
b. Connect to the URL and get the response:
url = "https://api.exchangeratesapi.io/latest?symbols=USD,GBP"
response = requests.get(url)
data = response.text
c. Convert the string to JSON format:
parsed = json.loads(data)
d. Extract the data and print it:
date = parsed["date"]
gbp_rate = parsed["rates"]["GBP"]
usd_rate = parsed["rates"]["USD"]
Here is the complete code:
import requests
import json
url = "https://api.exchangeratesapi.io/latest?symbols=USD,GBP"
response = requests.get(url)
data = response.text
parsed = json.loads(data)
date = parsed["date"]
gbp_rate = parsed["rates"]["GBP"]
usd_rate = parsed["rates"]["USD"]
print("On " + date + " EUR equals " + str(gbp_rate) + " GBP")
print("On " + date + " EUR equals " + str(usd_rate) + " USD")
Step 2: Preparing data received from API
There are two ways to load data to BigQuery.
You can save the received JSON-formatted data to a JSON file and then load it into BigQuery.
You can parse the JSON object, convert it to a dictionary object, and then load it into BigQuery.
Step 3: Loading data into Google BigQuery
We can load data into BigQuery directly using an API call, or we can create a CSV file and then load it into a BigQuery table.
Create a Python script to extract data from the API URL and load it (in UPSERT mode) into a BigQuery table. Here, UPSERT is nothing but a combination of Update and Insert operations. This means that if the target table has a matching key, the data is updated; otherwise, a new record is inserted.
import requests
import json
from google.cloud import bigquery

url = "https://api.exchangeratesapi.io/latest?symbols=USD,GBP"
response = requests.get(url)
data = response.text
parsed = json.loads(data)

base = parsed["base"]
date = parsed["date"]

client = bigquery.Client()
dataset_id = 'my_dataset'
table_id = 'currency_details'
table_ref = client.dataset(dataset_id).table(table_id)
table = client.get_table(table_ref)

for key, value in parsed.items():
    if type(value) is dict:
        for currency, rate in value.items():
            # Check whether this currency already exists in the target table
            check_query = (
                "SELECT target_currency FROM my_dataset.currency_details "
                "WHERE target_currency = '{}'".format(currency)
            )
            existing_rows = list(client.query(check_query).result())
            if existing_rows:
                # Matching key found: update the stored rate
                update_query = (
                    "UPDATE my_dataset.currency_details "
                    "SET rate = {} WHERE target_currency = '{}'".format(rate, currency)
                )
                client.query(update_query).result()
            else:
                # No matching key: insert a new record
                rows_to_insert = [(base, currency, 1, rate)]
                errors = client.insert_rows(table, rows_to_insert)
                assert errors == []
Load a JSON file to BigQuery. You need to save the received data in a JSON file and load that JSON file into a BigQuery table.
import requests
import json
from google.cloud import bigquery

url = "https://api.exchangeratesapi.io/latest?symbols=USD,GBP"
response = requests.get(url)
data = response.text
parsed = json.loads(data)

# Save the nested "rates" object to a JSON file
for key, value in parsed.items():
    if type(value) is dict:
        with open('F:/Python/data.json', 'w') as f:
            json.dump(value, f)

client = bigquery.Client(project="analytics-and-presentation")
filename = 'F:/Python/data.json'
dataset_id = 'my_dataset'
table_id = 'currency_rate_details'
dataset_ref = client.dataset(dataset_id)
table_ref = dataset_ref.table(table_id)

job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.NEWLINE_DELIMITED_JSON
job_config.autodetect = True

with open(filename, "rb") as source_file:
    job = client.load_table_from_file(source_file, table_ref, job_config=job_config)

job.result()  # Waits for the table load to complete.
print("Loaded {} rows into {}:{}.".format(job.output_rows, dataset_id, table_id))
Limitations of writing custom scripts and developing ETL to load data from API to BigQuery
The above code is written based on the current source and target schemas. If either the data coming in from the source or the schema on BigQuery changes, the ETL process will break.
In case you need to clean your data from the API – say, transform time zones, hide personally identifiable information, and so on – the current method does not support it. You will need to build another set of processes to accommodate that. Clearly, this would also need you to invest extra effort and money.
You are at serious risk of data loss if at any point your system breaks. This could be anything from the source/destination not being reachable to script breaks and more. You would need to invest upfront in building systems and processes that capture all the failure points and consistently move your data to the destination.
Since Python is an interpreted language, it might cause performance issues when extracting data from the API and loading it into BigQuery.
For many APIs, we would need to supply credentials to access the API. It is very poor practice to pass credentials as plain text in a Python script. You will need to take additional steps to ensure your pipeline is secure; one simple mitigation is sketched below.
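As one possible mitigation (not a full secrets-management setup), you can read the credential from an environment variable instead of hard-coding it. The variable name and the way the key is passed to the API below are assumptions for illustration; check your API’s documentation for the exact parameter.

# Sketch: read an API key from an environment variable instead of hard-coding it.
# The variable name and the Authorization header format are assumptions.
import os
import requests

api_key = os.environ.get("EXCHANGE_RATES_API_KEY")
if not api_key:
    raise RuntimeError("Set EXCHANGE_RATES_API_KEY before running this script.")

url = "https://api.exchangeratesapi.io/latest?symbols=USD,GBP"
response = requests.get(url, headers={"Authorization": "Bearer " + api_key})
response.raise_for_status()
parsed = response.json()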
API to BigQuery: Use Cases
Advanced Analytics: BigQuery has powerful data processing capabilities that enable you to perform complex queries and data analysis on your API data. This way, you can extract insights that would not be possible within API alone.
Data Consolidation: If you’re using multiple sources along with API, syncing them to BigQuery can help you centralize your data. This provides a holistic view of your operations, and you can set up a change data capture process to avoid discrepancies in your data.
Historical Data Analysis: API has limits on historical data. However, syncing your data to BigQuery allows you to retain and analyze historical trends.
Scalability: BigQuery can handle large volumes of data without affecting its performance. Therefore, it’s an ideal solution for growing businesses with expanding API data.
Data Science and Machine Learning: You can apply machine learning models to your data for predictive analytics, customer segmentation, and more by having API data in BigQuery.
Reporting and Visualization: While API provides reporting tools, data visualization tools like Tableau, PowerBI, and Looker (Google Data Studio) can connect to BigQuery, providing more advanced business intelligence options. If you need to convert an API table to a BigQuery table, Airbyte can do that automatically.
Additional Resources on API to Bigquery
Read more on how to Load Data into Bigquery
Conclusion
From this blog, you will understand the process you need to follow to load data from API to BigQuery. This blog also highlights various methods and their shortcomings. Using these two methods you can move data from API to BigQuery. However, using LIKE.TG , you can save a lot of your time!
Move data effortlessly with LIKE.TG ’s zero-maintenance data pipelines, Get a demo that’s customized to your unique data integration challenges
You can also have a look at the unbeatable LIKE.TG Pricing that will help you choose the right plan for your business needs!
FAQ on API to BigQuery
How to connect API to BigQuery?
1. Extract data out of your application using its API.
2. Transform and prepare the data to load it into BigQuery.
3. Load the data into BigQuery using a Python script.
4. Apart from these steps, you can also use automated data pipeline tools to connect your API URL to BigQuery.
Is BigQuery an API?
BigQuery is a fully managed, serverless data warehouse that allows you to perform SQL queries. It provides an API for programmatic interaction with the BigQuery service.
What is the BigQuery data transfer API?
The BigQuery Data Transfer API offers a wide range of support, allowing you to schedule and manage the automated data transfer to BigQuery from many sources. Whether your data comes from YouTube, Google Analytics, Google Ads, or external cloud storage, the BigQuery Data Transfer API has you covered.
How to input data into BigQuery?
Data can be inputted into BigQuery via the following methods:
1. Using the Google Cloud Console to manually upload CSV, JSON, Avro, Parquet, or ORC files.
2. Using the BigQuery CLI.
3. Using client libraries in languages like Python, Java, Node.js, etc., to programmatically load data.
4. Using data pipeline tools like LIKE.TG .
What is the fastest way to load data into BigQuery?
The fastest way to load data into BigQuery is to use automated Data Pipeline tools, which connect your source to the destination through simple steps. LIKE.TG is one such tool.

How to Connect Data from MongoDb to BigQuery in 2 Easy Methods
MongoDB is a popular NoSQL database that requires data to be modeled in JSON format. If your application’s data model has a natural fit to MongoDB’s recommended data model, it can provide good performance, flexibility, and scalability for transaction types of workloads. However, due to a few restrictions that you can face while analyzing data, it is highly recommended to stream data from MongoDB to BigQuery or any other data warehouse.
MongoDB doesn’t have proper joins, getting data from other systems into MongoDB can be difficult, and it has no native support for SQL. Drafting complex analytics logic in MongoDB’s aggregation framework is also not as easy as it is in SQL.
The article provides steps to migrate data from MongoDB to BigQuery. It also talks about LIKE.TG Data, making it easier to replicate data. Therefore, without any further ado, let’s start learning about this MongoDB to BigQuery ETL.
What is MongoDB?
MongoDB is a popular NoSQL database management system known for its flexibility, scalability, and ease of use. It stores data in flexible, JSON-like documents, making it suitable for handling a variety of data types and structures.
MongoDB is commonly used in modern web applications, data analytics, real-time processing, and other scenarios where flexibility and scalability are essential.
What is BigQuery?
BigQuery is a fully managed, serverless data warehouse and analytics platform provided by Google Cloud. It is designed to handle large-scale data analytics workloads and allows users to run SQL-like queries against multi-terabyte datasets in a matter of seconds.
BigQuery supports real-time data streaming for analysis, integrates with other Google Cloud services, and offers advanced features like machine learning integration, data visualization, and data sharing capabilities.
Prerequisites
mongoexport (for exporting data from MongoDB)
a BigQuery dataset
a Google Cloud Platform account
LIKE.TG free-trial account
Methods to move Data from MongoDB to BigQuery
Method 1: Using LIKE.TG Data to Set up MongoDB to BigQuery
Method 2: Manual Steps to Stream Data from MongoDB to BigQuery
Method 1: Using LIKE.TG Data to Set up MongoDB to BigQuery
Step 1: Select the Source Type
To selectMongoDBas the Source:
Click PIPELINES in the Asset Palette.
Click + CREATE in the Pipelines List View.
In the Select Source Type page, select the MongoDB variant.
Step 2: Select the MongoDB Variant
Select the MongoDB service provider that you use to manage your MongoDB databases:
Generic Mongo Database: Database management is done at your end, or by a service provider other than MongoDB Atlas.
MongoDB Atlas: The managed database service from MongoDB.
Step 3: Specify MongoDB Connection Settings
Refer to the following sections based on your MongoDB deployment:
Generic MongoDB.
MongoDB Atlas.
In the Configure your MongoDB Source page, specify the following:
Step 4: Configure BigQuery Connection Settings
Now select Google BigQuery as your destination and start moving your data.
You can modify only some of the settings you provide here once the Destination is created. Refer to the Modifying BigQuery Destination Configuration section of the LIKE.TG documentation for more information.
Click DESTINATIONS in the Asset Palette.
Click + CREATE in the Destinations List View.
In the Add Destination page, select Google BigQuery as the Destination type.
In the Configure your Google BigQuery Account page, select the authentication method for connecting to BigQuery.
In the Configure your Google BigQuery Warehouse page, specify the following details.
By following the above-mentioned steps, you will have successfully completed MongoDB to BigQuery replication.
With continuous Real-Time data movement, LIKE.TG allows you to combine MongoDB data with your other data sources and seamlessly load it to BigQuery with a no-code, easy-to-setup interface. Try our 14-day full-feature access free trial!
Method 2: Manual Steps to Stream Data from MongoDB to BigQuery
For the manual method, you will need some prerequisites, like:
MongoDB environment: You should have a MongoDB account with a dataset and collection created in it.
Tools like MongoDB Compass and the MongoDB Database Tools (which include the mongoexport utility) should be installed on your system.
You should have access to MongoDB, including the connection string required to establish a connection using the command line.
Google Cloud Environment
Google Cloud SDK
A Google Cloud project created with billing enabled
Google Cloud Storage Bucket
BigQuery API Enabled
After meeting these requirements, you can manually export your data from MongoDB to BigQuery. Let’s get started!
Step 1: Extract Data from MongoDB
For the first step, you must extract data from your MongoDB account using the command line. To do this, you can use the mongoexport utility. Remember that mongoexport should be run directly from your system’s command-line window.
An example of a command that you can give is:
mongoexport --uri="mongodb+srv://username:password@cluster_name.mongodb.net/database_name" --collection=collection_name --out=filename.file_format --fields="field1,field2…"
Note:
‘username:password’ is your MongoDB username and password.
‘cluster_name’ is the name of the cluster you created on your MongoDB account. It contains the database (database_name) that holds the data you want to extract.
‘--collection’ is the name of the collection (table) that you want to export.
‘--out=filename.file_format’ is the name and format of the file into which you want to extract the data. For example, with comments.csv, the extracted data will be stored as a CSV file named comments.
‘--fields’ is applicable if you want to extract data in CSV format.
After running this command, you will get a message like this displayed on your command prompt window:
Connected to:mongodb+srv://[**REDACTED**]@cluster-name.gzjfolm.mongodb.net/database_name
exported n records
Here, n is just an example. When you run this command, it will display the number of records exported from your MongoDB collection.
Step 2: Optional cleaning and transformations
This is an optional step, depending on the type of data you have exported from MongoDB. When preparing data to be transferred from MongoDB to BigQuery, there are a few fundamental considerations to make in addition to any modifications necessary to satisfy your business logic.
BigQuery processes UTF-8 CSV data. If your data is encoded in ISO-8859-1 (Latin-1), then you should specify that while loading it to BigQuery.
BigQuery doesn’t enforce Primary key or Unique key Constraints, and the ETL (Extract, Transform, and Load) process should take care of that.
Date values should be in the YYYY-MM-DD (Year-month-date) format and separated by dashes.
Also, both platforms have different column types, which should be transformed for consistent and error-free data transfer. For example, MongoDB’s ObjectId and String types map to BigQuery’s STRING, Date maps to TIMESTAMP, Boolean maps to BOOL, and embedded documents map to RECORD.
These are just a few transformations you need to consider. Make the necessary transformations before you load data to BigQuery; a small example is sketched below.
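As a hedged sketch of this kind of clean-up, the snippet below flattens MongoDB’s extended-JSON ObjectId and Date wrappers and keeps dates in the YYYY-MM-DD form before loading. It assumes the file was produced by mongoexport with one document per line and that dates were exported as ISO-8601 strings; the file names are assumptions, so adapt it to your own schema.

# Sketch: normalize a mongoexport JSON file before loading it into BigQuery.
# Assumes one document per line and ISO-8601 date strings; file names are assumptions.
import json

def normalize(value):
    if isinstance(value, dict):
        if "$oid" in value:
            return value["$oid"]             # ObjectId -> plain STRING
        if "$date" in value:
            return str(value["$date"])[:10]  # keep the YYYY-MM-DD part
        return {key: normalize(val) for key, val in value.items()}
    if isinstance(value, list):
        return [normalize(item) for item in value]
    return value

with open("comments.json") as src, open("comments_clean.json", "w") as dst:
    for line in src:
        document = json.loads(line)
        dst.write(json.dumps(normalize(document)) + "\n")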
Step 3: Uploading data to Google Cloud Storage (GCS)
After transforming your data, you must upload it to Google Cloud storage. The easiest way to do this is through your Google Cloud Web console.
Log in to your Google Cloud account and search for Buckets. Click Create, fill in the required fields, and create the bucket.
After creating the bucket, you will see your bucket listed with the rest. Select your bucket and click on the ‘upload files’ option.
Select the file you exported from MongoDB in Step 1. Your MongoDB data is now uploaded to Google Cloud Storage.
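If you prefer to script the upload instead of using the web console, the google-cloud-storage Python client can do the same thing. This is a minimal sketch; the project, bucket, and file names are placeholders.
from google.cloud import storage

# Upload the file produced by mongoexport in Step 1 to a GCS bucket.
client = storage.Client(project="my-project")          # placeholder project
bucket = client.bucket("my-mongodb-exports")            # placeholder bucket
blob = bucket.blob("exports/comments.csv")              # placeholder object path
blob.upload_from_filename("comments.csv")
print(f"Uploaded to gs://{bucket.name}/{blob.name}")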
Step 4: Upload Data Extracted from MongoDB to BigQuery Table from GCS
Now, from the left panel of Google Cloud, select BigQuery and select the project you are working on. Click on the three dots next to it and click ‘Create Dataset.’
Fill in all the necessary information and click the ‘Create Dataset’ button at the bottom. You have now created a dataset to store your exported data in.
Now click on the three dots next to the dataset name you just created. Let’s say I created the dataset called mongo_to_bq. Select the ‘Create table’ option.
Now, under Source, select the ‘Google Cloud Storage’ option and click ‘Browse’ to select the file you uploaded to your bucket in Step 3.
Fill in the rest of the details and click ‘Create Table’ at the bottom of the page.
Now, your data has been transferred from MongoDB to BigQuery.
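The same load can also be scripted with the google-cloud-bigquery Python client instead of the console wizard. The sketch below assumes the CSV uploaded in Step 3; the project, bucket, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # the mongoexport CSV has a header row
    autodetect=True,       # let BigQuery infer the schema
)
job = client.load_table_from_uri(
    "gs://my-mongodb-exports/exports/comments.csv",   # placeholder GCS path
    "my-project.mongo_to_bq.comments",                # placeholder table id
    job_config=job_config,
)
job.result()  # wait for the load job to complete
print("Loaded rows:", client.get_table("my-project.mongo_to_bq.comments").num_rows)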
Step 5: Verify Data Integrity
After loading the data to BigQuery, it is essential to verify that the same data from MongoDB has been transferred and that no missing or corrupted data is loaded to BigQuery. To verify the data integrity, run some SQL queries in BigQuery UI and compare the records fetched as their result with your original MongoDB data to ensure correctness and completeness.
Example: To find the locations of all the theaters in a dataset called “Theaters,” we can run a query like the one sketched below.
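The sketch below runs such a query through the BigQuery Python client; the project, table, and column names are illustrative and should be replaced with those in your own dataset.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
sql = """
    SELECT name, location
    FROM `my-project.mongo_to_bq.theaters`
    LIMIT 10
"""
# Print a sample of rows to compare against the source MongoDB documents.
for row in client.query(sql).result():
    print(row["name"], row["location"])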
Learn more about:
MongoDB data replication
Limitations of Manually Moving Data from MongoDB to BigQuery
The following are some possible drawbacks when data is streamed from MongoDB to BigQuery manually:
Time-Consuming: Compared to automated methods, manually exporting MongoDB data, transferring it to Cloud Storage, and then importing it into BigQuery is inefficient. Every time fresh data enters MongoDB, this laborious procedure must be repeated.
Potential for human error: With manual procedures at every stage, data can be exported incorrectly, uploaded to the wrong location, transformed badly, or loaded into the wrong table or partition.
Data lags behind MongoDB: The data in BigQuery might not be current with the most recent inserts and changes in the MongoDB database due to the manual process’s latency. Recent modifications may be overlooked in important analyses.
Difficult to incrementally add new data: Compared to automated streaming, which handles this efficiently, manually identifying and loading only new or modified MongoDB records is difficult.
Hard to reprocess historical data: It would be necessary to manually export historical data from MongoDB and reload it into BigQuery if any problems were discovered in the datasets that were previously imported.
No error handling: Without automated procedures to detect, manage, and retry failures, problems like network outages, data inaccuracies, or constraint violations can go unnoticed.
Scaling limitations: The manual export, upload, and load steps do not scale well and become increasingly difficult as data volumes grow.
These constraints drive the need for automated MongoDB to BigQuery replication to create more dependable, scalable, and resilient data pipelines.
MongoDB to BigQuery: Use Cases
Streaming data from MongoDB to BigQuery may be very helpful in the following frequent use cases:
Business analytics: Analysts may use BigQuery’s quick SQL queries, sophisticated analytics features, and smooth interaction with data visualization tools like Data Studio by streaming MongoDB data into BigQuery. This can lead to greater business insights.
Data warehousing: By streaming data from MongoDB and merging it with data from other sources, businesses may create a cloud data warehouse on top of BigQuery, enabling corporate reporting and dashboards.
Log analysis: BigQuery’s columnar storage and massively parallel processing capabilities enable the streaming of server, application, and clickstream logs from MongoDB databases for large-scale analytics.
Data integration: By streaming to BigQuery as a centralized analytics hub, businesses using MongoDB for transactional applications can integrate and analyze data from their relational databases, customer relationship management (CRM) systems, and third-party sources.
Machine Learning: Streaming data from production MongoDB databases may be utilized to train ML models using BigQuery ML’s comprehensive machine learning features.
Cloud migration: By gradually streaming data, move analytics from on-premises MongoDB to Google Cloud’s analytics and storage services.
Additional Read –
Stream Data from MongoDB Atlas to BigQuery
Move Data from MongoDB to MySQL
Connect MongoDB to Snowflake
Move Data from MongoDB to Redshift
MongoDB Atlas to BigQuery
Conclusion
This blog shows how to make migrating from MongoDB to BigQuery a routine task. The methods discussed here let you integrate business data from MongoDB into BigQuery through a smooth transition, without data loss or inconsistencies.
Sign up for a 14-day free trial with LIKE.TG Data to streamline your migration process and leverage multiple connectors, such as MongoDB and BigQuery, for real-time analysis!
FAQ on MongoDB To BigQuery
What is the difference between BigQuery and MongoDB?
BigQuery is a fully managed data warehouse for large-scale data analytics using SQL. MongoDB is a NoSQL database optimized for storing unstructured data with high flexibility and scalability.
How do I transfer data to BigQuery?
Use tools like Google Cloud Dataflow, BigQuery Data Transfer Service, or third-party ETL tools like LIKE.TG Data for a hassle-free process.
Is BigQuery SQL or NoSQL?
BigQuery is a SQL-based data warehouse designed to run fast, complex analytical queries on large datasets.
What is the difference between MongoDB and Oracle DB?
MongoDB is a NoSQL database optimized for unstructured data and flexibility. Oracle DB is a relational database (RDBMS) designed for structured data, complex transactions, and strong consistency.

A List of The 20 Best ETL Tools And Why To Choose Them in 2024
As data continues to grow in volume and complexity, the need for an efficient ETL tool becomes increasingly critical for a data professional. ETL tools not only streamline the process of extracting data from various sources but also transform it into a usable format and load it into a system of your choice. This ensures both data accuracy and consistency. This is why, in this blog, we’ll introduce you to the top 20 ETL tools to consider in 2024. We’ll walk through the key features, use cases, and pricing for every tool to give you a clear picture of what is available in the market. Let’s dive in!
What is ETL, and what is its importance?
Extract, transform, and load (ETL) is the essential data integration procedure for combining data from several sources into a single, central repository. The process entails gathering data, cleaning and reshaping it according to common business rules, and loading it into a database or data warehouse.
Extract: This step involves data extraction from various source systems, such as databases, files, APIs, or other data repositories. The extracted data may be structured, semi-structured, or unstructured.
Transform: During this step, the extracted data is transformed into a suitable format for analysis and reporting. This includes cleaning, filtering, aggregating, and applying business rules to ensure accuracy and consistency.
Load: This includes loading the transformed data into a target data warehouse, database, or other data repository, where it can be used for querying and analysis by end-users and applications.
ETL operations turn raw datasets into the format required for analytics so you can gain useful insights. This simplifies work such as researching demand trends, tracking changing customer preferences, keeping up with the newest trends, and ensuring regulatory compliance.
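To make the three steps concrete, here is a minimal, illustrative ETL run written in plain Python using only the standard library; the orders.csv file, its columns, and the SQLite target are hypothetical stand-ins for a real source and warehouse.
import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a source file.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    # Transform: clean values and apply a simple business rule (keep paid orders).
    for row in rows:
        if row["status"].strip().lower() == "paid":
            yield (row["order_id"], float(row["amount"]))

def load(records, db_path="warehouse.db"):
    # Load: write the transformed records into a target table.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", records)
    con.commit()
    con.close()

load(transform(extract("orders.csv")))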
Criteria for choosing the right ETL Tool
Choosing the right ETL tool for your company is crucial. These tools automate the data migration process, allowing you to schedule integrations in advance or execute them live. This automation frees you from tedious tasks like data extraction and import, enabling you to focus on more critical work. To make an informed decision, evaluate tools against the following criteria.
Cost: Organizations selecting an ETL tool should consider not only the initial price but also the long-term costs of infrastructure and labor. An ETL solution with higher upfront costs but lower maintenance and downtime may be more economical. Conversely, free, open-source ETL tools might require significant upkeep.
Usability: The tool should be intuitive and easy to use, allowing technical and non-technical users to navigate and operate it with minimal training. Look for interfaces that are clean, well-organized, and visually appealing.
Data Quality: The tool should provide robust data cleansing, validation, and transformation capabilities to ensure high data quality. Effective data quality management leads to more accurate and reliable analysis.
Performance: The tool should be able to handle large data volumes efficiently. Performance benchmarks and scalability options are critical, especially as your data needs grow.
Compatibility: Ensure the ETL tool supports various data sources and targets, including databases, cloud services, and data warehouses. Compatibility with multiple data environments is crucial for seamless integration.
Support and Maintenance: The level of support the vendor provides, including technical support, user forums, and online resources, should be evaluated. Reliable support is essential for resolving issues quickly and maintaining smooth operations.
Best ETL Tools of 2024
1. LIKE.TG Data
LIKE.TG Data is one of the most highly rated ELT platforms that allows teams to rely on timely analytics and data-driven decisions. You can replicate streaming data from 150+ Data Sources, including BigQuery, Redshift, etc., to the destination of your choice without writing a single line of code. The platform processes 450 billion records and supports dynamic scaling of workloads based on user requirements. LIKE.TG ’s architecture ensures the optimal usage of system resources to get the best return on your investment. LIKE.TG ’s intuitive user interface caters to more than 2000 customers across 45 countries.
Key features:
Data Streaming: LIKE.TG Data supports real-time data streaming, enabling businesses to ingest and process data from multiple sources in real-time. This ensures that the data in the target systems is always up-to-date, facilitating timely insights and decision-making.
Reliability: LIKE.TG provides robust error handling and data validation mechanisms to ensure data accuracy and consistency. Any errors encountered during the ETL process are logged and can be addressed promptly.
Cost-effectiveness: LIKE.TG offers transparent and straightforward pricing plans that cater to businesses of all sizes. The pricing is based on the volume of data processed, ensuring that businesses only pay for what they use.
Use cases:
Real-time data integration and analysis
Customer data integration
Supply chain optimization
Pricing:
LIKE.TG provides the following pricing plan:
Free
Starter- $239/per month
Professional- $679/per month
Business Critical- Contact sales
LIKE.TG : Your one-stop shop for everything ETL
Stop wasting time evaluating countless ETL tools. Pick LIKE.TG for its transparent pricing, auto schema mapping, in-flight transformation and other amazing features.
Get started with LIKE.TG today
2. Informatica PowerCenter
Informatica PowerCenter is a data integration platform widely used for enterprise data warehousing and data governance. PowerCenter’s capabilities enable organizations to integrate data from different sources into a consistent, accurate, and accessible format, and it is built to manage complex data integration jobs. Informatica uses integrated, high-quality data to power business growth and enable better-informed decision-making.
Key Features:
Role-based: Informatica’s role-based tools and agile processes enable businesses to deliver timely, trusted data to other companies.
Collaboration: Informatica allows analysts to collaborate with IT to prototype and validate results rapidly and iteratively.
Extensive support: Support for grid computing, distributed processing, high availability, adaptive load balancing, dynamic partitioning, and pushdown optimization
Use cases:
Data integration
Data quality management
Master data management
Pricing:
Informatica supports volume-based pricing. It also offers a free plan and three different paid plans for cloud data management.
3. AWS Glue
AWS Glue is a serverless data integration platform that helps analytics users discover, move, prepare, and integrate data from various sources. It can be used for analytics, application development, and machine learning. It includes additional productivity and data operations tools for authoring, running jobs, and implementing business workflows.
Key Features:
Auto-detect schema: AWS Glue uses crawlers that automatically detect and integrate schema information into the AWS Glue Data Catalog.
Transformations: AWS Glue visually transforms data with a job canvas interface
Scalability: AWS Glue supports dynamic scaling of resources based on workloads
Use cases:
Data cataloging
Data lake ingestion
Data processing
Pricing:
AWS Glue supports plans based on hourly rating, billed by the second, for crawlers (discovering data) and extract, transform, and load (ETL) jobs (processing and loading data).
4. IBM DataStage
IBM DataStage is an industry-leading data integration tool that helps you design, develop, and run jobs that move and transform data. At its core, the DataStage tool mainly helps extract, transform, and load (ETL) and extract, load, and transform (ELT) patterns.
Key features:
Data flows: IBM DataStage helps design data flows that extract information from multiple source systems, transform the data as required, and deliver the data to target databases or applications.
Easy connect: It helps connect directly to enterprise applications as sources or targets to ensure the data is complete, relevant, and accurate.
Time and consistency: It helps reduce development time and improves the consistency of design and deployment by using prebuilt functions.
Use cases:
Enterprise Data Warehouse Integration
ETL process
Big Data Processing
Pricing:
IBM DataStage’s pricing model is based on capacity unit hours. It also supports a free plan for small data.
5. Azure Data Factory
Azure Data Factory is a serverless data integration software that supports a pay-as-you-go model that scales to meet computing demands. The service offers no-code and code-based interfaces and can pull data from over 90 built-in connectors. It is also integrated with Azure Synapse analytics, which helps perform analytics on the integrated data.
Key Features
No-code pipelines: Provide services to develop no-code ETL and ELT pipelines with built-in Git and support for continuous integration and delivery (CI/CD).
Flexible pricing: Supports a fully managed, pay-as-you-go serverless cloud service that supports auto-scaling on the user’s demand.
Autonomous support: Supports autonomous ETL to gain operational efficiencies and enable citizen integrators.
Use cases
Data integration processes
Getting data to an Azure data lake
Data migrations
Pricing:
Azure Data Factory supports free and paid pricing plans based on user’s requirements. Their plans include:
Lite
Standard
Small Enterprise Bundle
Medium Enterprise Bundle
Large Enterprise Bundle
6. Google Cloud DataFlow
Google Cloud Dataflow is a fully managed data processing service built to optimize computing power and automate resource management. The service aims to lower processing costs by automatically scaling resources to meet demand and offering flexible scheduling. Furthermore, once the data is transformed, Google Cloud Dataflow provides AI capabilities to identify real-time anomalies and perform predictive analysis.
Key Features:
Real-time AI: Dataflow supports real-time AI capabilities, allowing real-time reactions with near-human intelligence to various events.
Latency: Dataflow helps minimize pipeline latency, maximize resource utilization, and reduce processing cost per data record with data-aware resource autoscaling.
Continuous Monitoring: This involves monitoring and observing the data at each step of a Dataflow pipeline to diagnose problems and troubleshoot effectively using actual data samples.
Use cases:
Data movement
ETL workflows
Powering BI dashboards
Pricing:
Google Cloud Dataflow uses a pay-as-you-go pricing model that provides flexibility and scalability for data processing tasks.
7. Stitch
Stitch is a cloud-first, open-source platform for rapidly moving data. It is a service for integrating data that gathers information from more than 130 platforms, services, and apps. The program centralized this data in a data warehouse, eliminating the need for manual coding. Stitch is open-source, allowing development teams to extend the tool to support additional sources and features.
Key Features:
Flexible schedule: Stitch provides easy scheduling of when you need the data replicated.
Fault tolerance: Resolves issues automatically and alerts users when required in case of detected errors
Continuous monitoring: Monitors the replication process with detailed extraction logs and loading reports
Use cases:
Data warehousing
Real-time data replication
Data migration
Pricing:
Stitch provides the following pricing plan:
Standard-$100/ month
Advanced-$1250 annually
Premium-$2500 annually
8. Oracle data integrator
Oracle Data Integrator is a comprehensive data integration platform covering all data integration requirements:
High-volume, high-performance batch loads
Event-driven, trickle-feed integration processes
SOA-enabled data services
In addition, it has built-in connections with Oracle GoldenGate and Oracle Warehouse Builder and allows parallel job execution for speedier data processing.
Key Features:
Parallel processing: ODI supports parallel processing, allowing multiple tasks to run concurrently and enhancing performance for large data volumes.
Connectors: ODI provides connectors and adapters for various data sources and targets, including databases, big data platforms, cloud services, and more. This ensures seamless integration across diverse environments.
Transformation: ODI provides Advanced Data Transformation Capabilities
Use cases:
Data governance
Data integration
Data warehousing
Pricing:
Oracle data integrator provides service prices at the customer’s request.
9. Integrate.io
Integrate.io is a leading low-code data pipeline platform that provides ETL services to businesses. Its constantly updated data offers insightful information for the organization to make decisions and perform activities like lowering its CAC, increasing its ROAS, and driving go-to-market success.
Key Features:
User-Friendly Interface: Integrate.io offers a low-code, simple drag-and-drop user interface and transformation features – like sort, join, filter, select, limit, clone, etc. —that simplify the ETL and ELT process.
API connector: Integrate.io provides a REST API connector that allows users to connect to and extract data from any REST API.
Order of action: Integrate.io’s low-code and no-code workflow creation interface allows you to specify the order of actions to be completed and the circumstances under which they should be completed using dropdown choices.
Use cases:
CDC replication
Supports slowly changing dimension
Data transformation
Pricing:
Integrate.io provides four elaborate pricing models such as:
Starter-$2.99/credit
Professional-$0.62/credit
Expert-$0.83/credit
Business Critical-custom
10. Fivetran
Fivetran’s platform of valuable tools is designed to make your data management process more convenient. Within minutes, the user-friendly software retrieves the most recent information from your database, keeping up with API updates. In addition to ETL tools, Fivetran provides database replication, data security services, and round-the-clock support.
Key Features:
Connectors: Fivetran makes data extraction easier by maintaining compatibility with hundreds of connectors.
Automated data cleaning: Fivetran automatically looks for duplicate entries, incomplete data, and incorrect data, making the data-cleaning process more accessible for the user.
Data transformation: Fivetran’s feature makes analyzing data from various sources easier.
Use cases:
Streamline data processing
Data integration
Data scheduling
Pricing:
Fivetran offers the following pricing plans:
Free
Starter
Standard
Enterprise
Solve your data replication problems with LIKE.TG ’s reliable, no-code, automated pipelines with 150+ connectors. Get your free trial right away!
11. Pentaho Data Integration (PDI)
Pentaho Data Integration(PDI) is more than just an ETL tool. It is a codeless data orchestration tool that blends diverse data sets into a single source of truth as a basis for analysis and reporting.
Users can design data jobs and transformations using the PDI client, Spoon, and then run them using Kitchen. For example, the PDI client can be used for real-time ETL with Pentaho Reporting.
Key Features:
Flexible Data Integration: Users can easily prepare, build, deploy, and analyze their data.
Intelligent Data Migration: Pentaho relies heavily on multi-cloud-based and hybrid architectures. By using Pentaho, you can accelerate your data movements across hybrid cloud environments.
Scalability: You can quickly scale out with enterprise-grade, secure, and flexible data management.
Flexible Execution Environments: PDI allows users to easily connect to and blend data anywhere, on-premises, or in the cloud, including Azure, AWS, and GCP. It also provides containerized deployment options—Docker and Kubernetes—and operationalizes Spark, R, Python, Scala, and Weka-based AI/ML models.
Accelerated Data Onboarding with Metadata Injection: It provides transformation templates for various projects that users can reuse to accelerate complex onboarding projects.
Use Cases:
Data Warehousing
Big Data Integration
Business Analytics
Pricing:
The software is available in a free community edition and a subscription-based enterprise edition. Users can choose one based on their needs.
12. Dataddo
Dataddo is a fully managed, no-code integration platform that syncs cloud-based services, dashboarding apps, data warehouses, and data lakes. It helps the users visualize, centralize, distribute, and activate data by automating its transfer from virtually any source to any destination. Dataddo’s no-code platform is intuitive for business users and robust enough for data engineers, making it perfect for any data-driven organization.
Key Features:
Certified and Fully Secure: Dataddo is SOC 2 Type II certified and compliant with all significant data privacy laws around the globe.
Offers various connectors: Dataddo offers 300+ off-the-shelf connectors, no matter your payment plan. Users can also request that the necessary connector be built if unavailable.
Highly scalable and Future-proof: Users can operate with any cloud-based tools they use now or in the future. They can use any connector from the ever-growing portfolio.
Store data without needing a warehouse: No data warehouse is necessary. Users can collect historical data in Dataddo’s embedded SmartCache storage.
Test Data Models Before Deploying at Full Scale: By sending their data directly to a dashboarding app, users can test the validity of any data model on a small scale before deploying it fully in a data warehouse.
Use Cases:
Marketing Data Integration(includes social media data connectors like Instagram, Facebook, Pinterest, etc.)
Data Analytics and Reporting
Pricing:
Offers various pricing models to meet user’s needs.
Free
Data to Dashboards- $99.0/mo
Data Anywhere- $99.0/mo
Headless Data Integration: Custom
13. Hadoop
Apache Hadoop is an open-source framework for efficiently storing and processing large datasets ranging in size from gigabytes to petabytes. Instead of using one large computer to store and process the data, Hadoop allows clustering multiple computers to analyze massive datasets in parallel more quickly. It offers four modules: Hadoop Distributed File System (HDFS), Yet Another Resource Negotiator (YARN), MapReduce, and Hadoop Common.
Key Features:
Scalable and cost-effective: Can handle large datasets at a lower cost.
Strong community support: Hadoop offers wide adoption and a robust community.
Suitable for handling massive amounts of data: Efficient for large-scale data processing.
Fault Tolerance is Available: Hadoop data is replicated on various DataNodes in a Hadoop cluster, which ensures data availability if any of your systems crash.
Best Use Cases:
Analytics and Big Data
Marketing Analytics
Risk management (in finance, etc.)
Healthcare
Batch processing of large datasets
Pricing: Free
14. Qlik
Qlik’s Data Integration Platform automates real-time data streaming, refinement, cataloging, and publishing between multiple source systems and Google Cloud. It drives agility in analytics through automated data pipelines that provide real-time data streaming from the most comprehensive source systems (including SAP, Mainframe, RDBMS, Data Warehouse, etc.) and automates the transformation to analytics-ready data across Google Cloud.
Key Features:
Real-Time Data for Faster, Better Insights: Qlik delivers large volumes of real-time, analytics-ready data into streaming and cloud platforms, data warehouses, and data lakes.
Agile Data Delivery: Qlik enables the creation of analytics-ready data pipelines across multi-cloud and hybrid environments, automating data lakes, warehouses, and intelligent designs to reduce manual errors.
Enterprise-grade security and governance: Qlik helps users discover, remediate, and share trusted data with simple self-service tools to automate data processes and help ensure compliance with regulatory requirements.
Data Warehouse Automation: Qlik accelerates the availability of analytics-ready data by modernizing and automating the entire data warehouse life cycle.
Qlik Staige: Qlik’s AI helps customers to implement generative models, better inform business decisions, and improve outcomes.
Use Cases:
Business intelligence and analytics
Augmented analytics
Visualization and dashboard creation
Pricing:
It offers three pricing options to its users:
Stitch Data Loader
Qlik Data Integration
Talend Data Fabric
15. Airbyte
Airbyte is one of the best data integration and replication tools for setting up seamless data pipelines. This leading open-source platform offers a catalog of 350+ pre-built connectors. Although the catalog library is expansive, you can still build a custom connector to data sources and destinations not in the pre-built list. Creating a custom connector takes a few minutes because Airbyte makes the task easy.
Key Features:
Multiple Sources: Airbyte can easily consolidate numerous sources. You can quickly bring your datasets together at your chosen destination if your datasets are spread over various locations.
Massive variety of connectors: Airbyte offers 350+ pre-built and custom connectors.
Open Source: Free to use, and with open source, you can edit connectors and build new connectors in less than 30 minutes without needing separate systems.
It provides a version-control tool and options to automate your data integration processes.
Use Cases:
Data Engineering
Marketing
Sales
Analytics
AI
Pricing:
It offers various pricing models:
Open Source- Free
Cloud—It offers a free trial and charges $360/mo for a 30GB volume of data replicated per month.
Team- Talk to the sales team for the pricing details
Enterprise- Talk to the sales team for the pricing details
16. Portable.io
Portable builds custom no-code integrations, ingesting data from SaaS providers and many other data sources that might not be supported because other ETL providers overlook them. Potential customers can see their extensive connector catalog of over 1300+ hard-to-find ETL connectors. Portable enables efficient and timely data management and offers robust scalability and high performance.
Key Features:
Massive Variety of pre-built connectors: Bespoke connectors built and maintained at no cost.
Visual workflow editor: It provides a graphical interface that is simple to use to create ETL procedures.
Real-Time Data Integration: It supports real-time data updates and synchronization.
Scalability: Users can scale to handle larger data volumes as needed.
Use Cases:
High-frequency trading
Understanding supply chain bottlenecks
Freight tracking
Business Analytics
Pricing:
It offers three pricing models to its customers:
Starter: $290/mo
Scale: $1,490/mo
Custom Pricing
17. Skyvia
Skyvia is a Cloud-based web service that provides data-based solutions for integration, backup, management, and connectivity. Its areas of expertise include ELT and ETL (Extract, Transform, Load) import tools for advanced mapping configurations.
It provides wizard-based data integration throughout databases and cloud applications with no coding. It aims to help small businesses securely manage data from disparate sources with a cost-effective service.
Key Features:
Suitable for businesses of all sizes: Skyvia offers different pricing plans for businesses of various sizes and needs, and every company can find a suitable one.
Always available: Hosted in reliable Azure cloud and multi-tenant fault-tolerant cloud architecture, Skyvia is always online.
Easy access to on-premise data: Users can connect Skyvia to local data sources via a secure agent application without re-configuring the firewall, port forwarding, and other network settings.
Centralized payment management: Users can control subscriptions and payments for multiple users and teams from one place. All the users within an account share the same pricing plans and limits.
Workspace sharing: Skyvia’s flexible workspace structure allows users to manage team communication, control access, and collaborate on integrations in test environments.
Use Cases:
Inventory Management
Data Integration and Visualization
Data Analytics
Pricing:
It Provides five pricing options to its users:
Free
Basic: $70/mo
Standard: $159/mo
Professional: $199/mo
Enterprise: Contact the team for pricing information.
18. Singer
Singer is an open-source standard for moving data between databases, web APIs, files, queues, etc. The Singer spec describes how data extraction scripts—called “Taps”—and data loading scripts—“Targets”—should communicate using a standard JSON-based data format over stdout. By conforming to this spec, Taps and Targets can be used in any combination to move data from any source to any destination.
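As a quick illustration of the spec, the toy “tap” below emits Singer-formatted SCHEMA, RECORD, and STATE messages to stdout; the stream and field names are made up for the example.
import json
import sys

def emit(message):
    # Singer messages are newline-delimited JSON written to stdout.
    sys.stdout.write(json.dumps(message) + "\n")

# Describe the stream, emit one record, then emit a bookmark (state).
emit({"type": "SCHEMA", "stream": "users",
      "schema": {"properties": {"id": {"type": "integer"},
                                "name": {"type": "string"}}},
      "key_properties": ["id"]})
emit({"type": "RECORD", "stream": "users", "record": {"id": 1, "name": "Ada"}})
emit({"type": "STATE", "value": {"users": 1}})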
Key Features:
Unix-inspired: Singer taps and targets are simple applications composed of pipes—no daemons or complicated plugins needed.
JSON-based: Singer applications communicate with JSON, making them easy to work with and implement in any programming language.
Efficient: Singer makes maintaining a state between invocations to support incremental extraction easy.
Sources and Destinations: Singer provides over 100 sources and has ten target destinations with all significant data warehouses, lakes, and databases as destinations.
Open Source platform: Singer.io is a flexible ETL tool that enables you to create scripts to transfer data across locations. You can create your own taps and targets or use those already there.
Use Cases:
Data Extraction and loading.
Custom Pipeline creation.
Pricing: Free
19. Matillion
Matillion is one of the best cloud-native ETL tools designed for the cloud. It can work seamlessly on all significant cloud-based data platforms, such as Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse, and Delta Lake on Databricks. Matillion’s intuitive interface reduces maintenance and overhead costs by running all data jobs in the cloud.
Key Features:
ELT/ETL and reverse ETL
PipelineOS/Agents: Users can dynamically scale with Matillion’s PipelineOS, the operating system for your pipelines. Distribute individual pipeline tasks across multiple stateless containers to match the data workload and allocate only necessary resources.
High availability: By configuring high-availability Matillion clustered instances, users can keep Matillion running, even if components temporarily fail.
Multi-plane architecture: Easily manage tasks across multiple tenants, including access control, provisioning, and system maintenance.
Use Cases:
ETL/ELT/Reverse ETL
Streamline data operations
Change Data Capture
Pricing:
It provides three packages:
Basic- $2.00/credit
Advanced- $2.50/credit
Enterprise- $2.70/credit
20. Apache Airflow
Apache Airflow is an open-source platform bridging orchestration and management in complex data workflows. Originally designed to serve the requirements of Airbnb’s data infrastructure, it is now being maintained by the Apache Software Foundation. Airflow is one of the most used tools for data engineers, data scientists, and DevOps practitioners looking to automate pipelines related to data engineering.
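For a feel of how pipelines are expressed, here is a minimal DAG sketch, assuming Airflow 2.4 or later; the task logic is only a placeholder.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source")      # placeholder task logic

def load():
    print("load data into the warehouse")   # placeholder task logic

with DAG(
    dag_id="simple_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load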
Key Features:
Easy usability: Just a little knowledge of Python is required to deploy Airflow.
Open Source: It is an open-source platform, making it free to use and resulting in many active users.
Numerous Integrations: Platforms like Google Cloud, Amazon AWS, and many more can be readily integrated using the available integrations.
Python for coding: Beginner-level knowledge of Python is sufficient to create complex workflows in Airflow.
User Interface: Airflow’s UI helps monitor and manage workflows.
Highly Scalable: Airflow can execute thousands of tasks per day simultaneously.
Use Cases:
Business Operations
ELT/ETL
Infrastructure Management
MLOps
Pricing: Free
Comparison of Top 20 ETL Tools
Future Trends in ETL Tools
Data Integration and Orchestration: The change from ETL to ELT is just one example of how the traditional ETL environment will change. To build ETL for the future, we need to focus on the data streams rather than the tools. We must account for real-time latency, source control, schema evolution, and continuous integration and deployment.
Automation and AI in ETL: Artificial intelligence and machine learning will no doubt dramatically change traditional ETL technologies within a few years. Solutions automate data transformation tasks, enhancing accuracy and reducing manual intervention in ETL procedures. Predictive analytics further empowers ETL solutions to project data integration challenges and develop better methods for improvement.
Real-time Processing: Yet another trend will move ETL technologies away from batch processing and towards introducing continuous data streams with real-time data processing technologies.
Cloud-Native ETL: Cloud-native ETL solutions will provide organizations with scale, flexibility, and cost savings. Organizations embracing serverless architectures will minimize administrative tasks on infrastructure and increase their focus on data processing agility.
Self-Service ETL: With the rise in automated ETL platforms, people with low/no technical knowledge can also implement ETL technologies to streamline their data processing. This will reduce the pressure on the engineering team to build pipelines and help businesses focus on performing analysis.
Conclusion
ETL pipelines form the foundation for organizations’ decision-making procedures. This step is essential to prepare raw data for storage and analytics. ETL solutions make it easier to do sophisticated analytics, optimize data processing, and promote end-user satisfaction. You must choose the best ETL tool to make your company’s most significant strategic decisions. Selecting the right ETL tool depends on your data integration needs, budget, and existing technology stack. The tools listed above represent some of the best options available in 2024, each with its unique strengths and features. Whether looking for a simple, no-code solution or a robust, enterprise-grade platform, an ETL tool on this list can meet your requirements and help you streamline your data integration process.
FAQ on ETL tools
What is ETL and its tools?
ETL stands for Extract, Transform, Load. It’s a process used to move data from one place to another while transforming it into a useful format. Popular ETL tools include:
1. LIKE.TG Data: Robust, enterprise-level.
2. Pentaho Data Integration: Open-source, user-friendly.
3. Apache NiFi: Good for real-time data flows.
4. AWS Glue: Serverless ETL service.
Is SQL an ETL tool?
Not really. SQL is a language for managing and querying databases. While you can use SQL for the transformation part of ETL, it’s not an ETL tool.
Which ETL tool is used most?
It depends on the use case, but popular tools include LIKE.TG Data, Apache Nifi, and AWS Glue.
What are ELT tools?
ELT stands for Extract, Load, Transform. It’s like ETL, but you load the data first and transform it into the target system. Tools for ELT include LIKE.TG Data, Azure Data Factory, Matillion, Apache Airflow, and IBM DataStage

MongoDB to Snowflake: 3 Easy Methods
Organizations often need to integrate data from various sources to gain valuable insights. One common scenario is transferring data from a NoSQL database like MongoDB to a cloud data warehouse like Snowflake for advanced analytics and business intelligence. However, this process can be challenging, especially for those new to data engineering. In this blog post, we’ll explore three easy methods to seamlessly migrate data from MongoDB to Snowflake, ensuring a smooth and efficient data integration process.
MongoDB real-time replication to Snowflake ensures that data is consistently synchronized between the two systems. Due to MongoDB’s schemaless nature, it is often important to move the data to a warehouse like Snowflake for meaningful analysis.
In this article, we will discuss the different methods to migrate MongoDB to Snowflake.
Note: The MongoDB Snowflake connector offers a solution for the real-time data synchronization challenges many organizations face.
Methods to replicate MongoDB to Snowflake
There are three popular methods to perform MongoDB to Snowflake ETL:
Method 1: Using LIKE.TG Data to Move Data from MongoDB to Snowflake
LIKE.TG , an official Snowflake Partner for Data Integration, simplifies the process of data transfer from MongoDB to Snowflake for free with its robust architecture and intuitive UI. You can achieve data integration without any coding experience and absolutely no manual interventions would be required during the whole process after the setup.
GET STARTED WITH LIKE.TG FOR FREE
Method 2: Writing Custom Scripts to Move Data from MongoDB to Snowflake
This is a simple 4-step process to move data from MongoDB to Snowflake. It starts with extracting data from MongoDB collections and ends with copying staged files to the Snowflake table. This method of moving data from MongoDB to Snowflake has significant advantages but suffers from a few setbacks as well.
Method 3: Using Native Cloud Tools and Snowpipe for MongoDB to Snowflake
In this method, we’ll leverage native cloud tools and Snowpipe, a continuous data ingestion service, to load data from MongoDB into Snowflake. This approach eliminates the need for a separate ETL tool, streamlining the data transfer process.
Introduction to MongoDB
MongoDB is a popular NoSQL database management system designed for flexibility, scalability, and performance in handling unstructured or semi-structured data. This document-oriented database stores data as flexible, JSON-like documents rather than in the rows and tables of traditional relational databases. Data in MongoDB is organized into collections, which contain documents. Each document may have its own schema, which allows for dynamic, schema-less data storage. MongoDB also supports rich queries, indexing, and aggregation.
Key Use Cases
Real-time Analytics: You can leverage its aggregation framework and indexing capabilities to handle large volumes of data for real-time analytics and reporting.
Personalization/Customization: It can efficiently support applications that require real-time personalization and recommendation engines by storing and querying user behavior and preferences.
Introduction to Snowflake
Snowflake is a fully managed service that provides customers with near-infinite scalability of concurrent workloads to easily integrate, load, analyze, and securely share their data. Its common applications include data lakes, data engineering, data application development, data science, and secure consumption of shared data.
Snowflake’s unique architecture natively integrates compute and storage, enabling your users and data workloads to access a single copy of your data without any detrimental effect on performance.
With Snowflake, you can seamlessly run your data solution across multiple regions and Clouds for a consistent experience. Snowflake makes it possible by abstracting the complexity of underlying Cloud infrastructures.
Advantages of Snowflake
Scalability: Using Snowflake, you can automatically scale the compute and storage resources to manage varying workloads without any human intervention.
Supports Concurrency: Snowflake delivers high performance when dealing with multiple users supporting mixed workloads without performance degradation.
Efficient Performance: You can achieve optimized query performance through the unique architecture of Snowflake, with particular techniques applied in columnar storage, query optimization, and caching.
Understanding the Methods to Connect MongoDB to Snowflake
These are the methods you can use to move data from MongoDB to Snowflake:
Method 1: Using LIKE.TG Data to Move Data from MongoDB to Snowflake
Method 2: Writing Custom Scripts to Move Data from MongoDB to Snowflake
Method 3: Using Native Cloud Tools and Snowpipe for MongoDB to Snowflake
Method 1: Using LIKE.TG Data to Move Data from MongoDB to Snowflake
You can use LIKE.TG Data to effortlessly move your data from MongoDB to Snowflake in just two easy steps. Go through the detailed illustration provided below of moving your data using LIKE.TG to ease your work.
Learn more about LIKE.TG
Step 1: Configure MongoDB as a Source
LIKE.TG supports 150+ sources, including MongoDB. All you need to do is provide us with access to your database.
Step 1.1: Select MongoDB as the source.
Step 1.2: Provide Credentials to MongoDB – You need to provide details like Hostname, Password, Database Name and Port number so that LIKE.TG can access your data from the database.
Step 1.3: Once you have filled in the required details, you can enable the Advanced Settings options that LIKE.TG provides.
Once done, Click on Test and Continue to test your connection to the database.
Step 2: Configure Snowflake as a Destination
After configuring your Source, you can select Snowflake as your destination. You need to have an active Snowflake account for this.
Step 2.1: Select Snowflake as the Destination.
Step 2.2: Enter Snowflake Configuration Details – You can enter the Snowflake Account URL that you obtained. Also, Database User, Database Password, Database Name, and Database Schema.
Step 2.3: You can now click on Save Destination.
After the connection has been successfully established between the source and the destination, data will start flowing automatically. That’s how easy LIKE.TG makes it for you.
With this, you have successfully set up MongoDB to Snowflake Integration using LIKE.TG Data.
Learn how to set up MongoDB as a source.
Learn how to set up Snowflake as a destination.
Here are a few advantages of using LIKE.TG :
Easy Setup and Implementation – LIKE.TG is a self-serve, managed data integration platform. You can cut down your project timelines drastically, as LIKE.TG can help you move data from MongoDB to Snowflake in minutes.
Transformations – LIKE.TG provides preload transformations through Python code. It also allows you to run transformation code for each event in the pipelines you set up. You need to edit the event object’s properties received in the transform method as a parameter to carry out the transformation. LIKE.TG also offers drag-and-drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few. These can be configured and tested before putting them to use.
Connectors – LIKE.TG supports 150+ integrations to SaaS platforms, files, databases, analytics, and BI tools. It supports various destinations including Google BigQuery, Amazon Redshift, Snowflake Data Warehouses; Amazon S3 Data Lakes; and MySQL, MongoDB, TokuDB, DynamoDB, and PostgreSQL databases to name a few.
150+ Pre-built integrations – In addition to MongoDB, LIKE.TG can bring data from 150+ other data sources into Snowflake in real-time. This will ensure that LIKE.TG is the perfect companion for your business’s growing data integration needs.
Complete Monitoring and Management – In case the MongoDB server or Snowflake data warehouse is not reachable, LIKE.TG will re-attempt data loads in a set instance, ensuring that you always have accurate, up-to-date data in Snowflake.
24×7 Support – To ensure that you get timely help, LIKE.TG has a dedicated support team that is available 24×7 to ensure that you are successful with your project.
Simplify your Data Analysis with LIKE.TG today!
SIGN UP HERE FOR A 14-DAY FREE TRIAL!
Method 2: Writing Custom Scripts to Move Data from MongoDB to Snowflake
Below is a quick snapshot of the broad framework to move data from MongoDB to Snowflake using custom code.
The steps are:
Step 1: Extracting data from MongoDB Collections
Step 2: Optional Data Type conversions and Data Formatting
Step 3: Staging Data Files
Step 4: Copying Staged Files to Snowflake Table
Step 5: Migrating to Snowflake
Let’s take a detailed look at all the required steps for MongoDB Snowflake Integration:
Step 1: Extracting data from MongoDB Collections
mongoexport is a utility that ships with MongoDB and can be used to create JSON or CSV exports of the data stored in any MongoDB collection.
The following points are to be noted while using mongoexport:
mongoexport should be run directly from the system command line, not from the mongo shell (the mongo shell is the command-line tool used to interact with MongoDB).
The connecting user should have at least the read role on the target database; otherwise, a permission error will be thrown.
mongoexport by default uses primary read (direct read operations to the primary member in a replica set) as the read preference when connected to mongos or a replica set.
Also, note that the default read preference, which is “primary read”, can be overridden using the --readPreference option.
Below is an example showing how to export data from the collection named contact_coln to a CSV file in the location /opt/exports/csv/col_cnts.csv
mongoexport --db users --collection contact_coln --type=csv --fields empl_name,empl_address --out /opt/exports/csv/empl_contacts.csv
To export in CSV format, you should specify the column names in the collection to be exported. The above example specifies the empl_name and empl_address fields to export.
The output would look like this:
empl_name, empl_address
Prasad, 12 B street, Mumbai
Rose, 34544 Mysore
You can also specify the fields to be exported in a file as a line-separated list, with one field per line. For example, you can specify the empl_name and empl_address fields in a file empl_contact_fields.txt:
empl_name
empl_address
Then, using the --fieldFile option, define the fields to export with the file:
mongoexport --db users --collection contact_coln --type=csv --fieldFile empl_contact_fields.txt --out /opt/backups/emplyee_contacts.csv
Exported CSV files will have field names as a header by default. If you don’t want a header in the output file, the --noHeaderLine option can be used.
As in the above example, --fields can be used to specify the fields to be exported. It can also be used to specify nested fields. Suppose you have a post_code field within the employee_address field; it can be specified as employee_address.post_code.
Incremental Data Extract From MongoDB
So far, we have discussed extracting an entire MongoDB collection. It is also possible to filter the data while extracting from the collection by passing a query, which can be used for incremental data extraction. --query or -q is used to pass the query. For example, let’s consider the contacts collection discussed above, and suppose the ‘updated_time’ field in each document stores the Unix timestamp at which the document was last inserted or updated.
mongoexport -d users -c contact_coln -q '{ updated_time: { $gte: 154856788 } }' --type=csv --fieldFile employee_contact_fields.txt --out exportdir/emplyee_contacts.csv
The above command will extract all records from the collection with updated_time greater than or equal to the specified value, 154856788. You should keep track of the last pulled updated_time separately and use that value while fetching data from MongoDB each time, as sketched below.
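One simple way to do this, sketched below in Python, is to persist the last exported updated_time in a local file and pass it to mongoexport’s --query filter on each run. The file paths and the state-tracking approach are illustrative only; adapt them to your environment.
import subprocess
from pathlib import Path

STATE_FILE = Path("last_updated_time.txt")  # hypothetical high-water-mark file
last_ts = int(STATE_FILE.read_text()) if STATE_FILE.exists() else 0

# Build the incremental filter and run mongoexport as in the example above.
query = '{ "updated_time": { "$gte": %d } }' % last_ts
subprocess.run([
    "mongoexport", "-d", "users", "-c", "contact_coln",
    "-q", query, "--type=csv",
    "--fieldFile", "employee_contact_fields.txt",
    "--out", "exportdir/emplyee_contacts.csv",
], check=True)

# After a successful export, record the new high-water mark (for example,
# the maximum updated_time seen in the exported data).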
Step 2: Optional Data Type conversions and Data Formatting
Along with the application-specific logic to be applied while transferring data, the following are to be taken care of when migrating data to Snowflake.
Snowflake can support many of the character sets including UTF-8. For the full list of supported encodings please visit here.
If you have worked with cloud-based data warehousing solutions before, you might have noticed that most of them lack support for standard SQL constraints like UNIQUE, PRIMARY KEY, FOREIGN KEY, and NOT NULL. However, keep in mind that Snowflake supports most of these SQL constraints.
Snowflake data types cover all basic and semi-structured types like arrays. It also has inbuilt functions to work with semi-structured data. The below list shows Snowflake data types compatible with the various MongoDB data types.
As you can see from this table of MongoDB vs Snowflake data types, while inserting data, Snowflake allows almost all of the date/time formats. You can explicitly specify the format while loading data with the help of the File Format Option. We will discuss this in detail later. The full list of supported date and time formats can be found here.
Step 3: Staging Data Files
If you want to insert data into a Snowflake table, the data should be uploaded to online storage like S3. This process is called staging. Generally, Snowflake supports two types of stages – internal and external.
Internal Stage
For every user and table, Snowflake creates and allocates a staging location that is used by default for staging activities. These stages are named using the conventions mentioned below. Note that it is also possible to create named internal stages.
The user stage is named ‘@~’
The name of the table stage is the name of the table.
The user or table stages can’t be altered or dropped.
It is not possible to set file format options in the default user or table stages.
Named internal stages can be created explicitly using SQL statements. While creating named internal stages, file format, and other options can be set which makes loading data to the table very easy with minimal command options.
SnowSQL is a lightweight CLI client that can be used to run commands like DDLs or data loads. It is available for Linux, Mac, and Windows. Read more about the tool and its options here.
Below are some example commands to create a stage:
Create a named stage:
create or replace stage my_mongodb_stage
copy_options = (on_error='skip_file')
file_format = (type = 'CSV' field_delimiter = '|' skip_header = 2);
The PUT command is used to stage data files to an internal stage. The syntax is straightforward – you only need to specify the file path and stage name:
PUT file://path_to_file/filename internal_stage_name
Eg:
Upload a file named emplyee_contacts.csv in the /tmp/mongodb_data/data/ directory to an internal stage named mongodb_stage
put file:///tmp/mongodb_data/data/emplyee_contacts.csv @mongodb_stage;
There are many configurations to be set to maximize data load spread while uploading the file like the number of parallelisms, automatic compression of data files, etc. More information about those options is listed here.
External Stage
AWS and Azure are the industry leaders in the public cloud market, so it comes as no surprise that Snowflake supports both Amazon S3 and Microsoft Azure as external staging locations. If the data is in S3 or Azure, all you need to do is create an external stage that points to it, and the data can be loaded into the table.
To create an external stage on S3, IAM credentials are to be specified. If the data in S3 is encrypted, encryption keys should also be given.
create or replace stage mongod_ext_stage url='s3://snowflake/data/mongo/load/files/'
credentials=(aws_key_id='181a233bmnm3c' aws_secret_key='a00bchjd4kkjx5y6z')
encryption=(master_key = 'e00jhjh0jzYfIjka98koiojamtNDwOaO8=');
Data to the external stage can be uploaded using respective cloud web interfaces or provided SDKs or third-party tools.
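For example, if the external stage points to s3://snowflake/data/mongo/load/files/ as in the statement above, a short boto3 sketch like the one below could upload the exported file. The bucket and key only mirror that example URL; in practice you would use your own bucket.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="/tmp/mongodb_data/data/emplyee_contacts.csv",  # local export file
    Bucket="snowflake",                                       # placeholder bucket
    Key="data/mongo/load/files/emplyee_contacts.csv",         # placeholder key
)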
Step 4: Copying Staged Files to Snowflake Table
COPY INTO is the command used to load data from the stage area into the Snowflake table. The compute resources needed to load the data are supplied by virtual warehouses, and the data loading time will depend on the size of the virtual warehouse.
Eg:
To load from a named internal stage
copy into mongodb_internal_table
from @mongodb_stage;
To load from the external stage (here only one file is specified):
copy into mongodb_external_stage_table
from @mongodb_ext_stage/tutorials/dataloading/employee_contacts_ext.csv;
To copy directly from an external location without creating a stage:
copy into mongodb_table
from s3://mybucket/snow/mongodb/data/files
credentials=(aws_key_id='$AWS_ACCESS_KEY_ID' aws_secret_key='$AWS_SECRET_ACCESS_KEY')
encryption=(master_key = 'eSxX0jzYfIdsdsdsamtnBKOSgPH5r4BDDwOaO8=')
file_format = (format_name = csv_format);
The subset of files can be specified using patterns
copy into mongodb_table
from @mongodb_stage
file_format = (type = 'CSV')
pattern='.*/.*/.*[.]csv[.]gz';
Some common format options used in the COPY command for CSV format:
COMPRESSION – Compression used for the input data files.
RECORD_DELIMITER – The character used as the record or line separator.
FIELD_DELIMITER – The character used for separating fields in the input file.
SKIP_HEADER – The number of header lines to skip while loading data.
DATE_FORMAT – Used to specify the date format
TIME_FORMAT – Used to specify the time format
The full list of options is given here.
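If you would rather script the staging and load instead of running SnowSQL interactively, the Snowflake Python connector can execute the same PUT and COPY INTO statements. This is a rough sketch; the account, credentials, and object names are placeholders that reuse the stage and table names from the examples above.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account", user="your_user", password="your_password",
    warehouse="your_wh", database="your_db", schema="public",
)
cur = conn.cursor()
# Stage the exported CSV, then copy it into the target table.
cur.execute("put file:///tmp/mongodb_data/data/emplyee_contacts.csv @mongodb_stage")
cur.execute("""
    copy into mongodb_internal_table
    from @mongodb_stage
""")
cur.close()
conn.close()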
Download the Cheatsheet on How to Set Up ETL to Snowflake
Learn the best practices and considerations for setting up high-performance ETL to Snowflake
Step 5: Migrating to Snowflake
While discussing data extraction from MongoDB, we considered both full and incremental approaches. Here, we will look at how to migrate that data into Snowflake effectively.
Snowflake’s unique architecture helps to overcome many shortcomings of existing big data systems. Support for row-level updates is one such feature.
Out-of-the-box support for the row-level updates makes delta data load to the Snowflake table simple. We can extract the data incrementally, load it into a temporary table and modify records in the final table as per the data in the temporary table.
There are three popular methods to update the final table with new data after new data is loaded into the intermediate table.
1. Update the rows in the final table with the values in the temporary table, and insert new rows from the temporary table into the final table.
UPDATE final_mongodb_table t
SET value = s.value
FROM intermed_mongodb_table s
WHERE t.id = s.id;
INSERT INTO final_mongodb_table (id, value)
SELECT id, value
FROM intermed_mongodb_table
WHERE id NOT IN (SELECT id FROM final_mongodb_table);
2. Delete all rows from the final table which are also present in the temporary table. Then insert all rows from the intermediate table to the final table.
DELETE FROM final_mongodb_table
WHERE id IN (SELECT id FROM intermed_mongodb_table);
INSERT INTO final_mongodb_table (id, value)
SELECT id, value
FROM intermed_mongodb_table;
3. MERGE statement – Using a single MERGE statement, both inserts and updates can be carried out simultaneously. We can use this option to apply the changes in the temporary table to the final table.
MERGE INTO final_mongodb_table t1 USING intermed_mongodb_table t2 ON t1.id = t2.id
WHEN MATCHED THEN UPDATE SET value = t2.value
WHEN NOT MATCHED THEN INSERT (id, value) VALUES (t2.id, t2.value);
Limitations of using Custom Scripts to Connect MongoDB to Snowflake
Even though the manual method will get the work done, you might face some difficulties along the way. I have listed below some limitations that might hinder your data migration process:
If you want to migrate data from MongoDB to Snowflake in batches, then this approach works decently well. However, if you are looking for real-time data availability, this approach becomes extremely tedious and time-consuming.
With this method, you can only move data from one place to another, but you cannot transform the data when in transit.
When you write code to extract a subset of data, those scripts often break as the source schema keeps changing or evolving. This can result in data loss.
The method mentioned above is error-prone, which might impact the availability and accuracy of data in Snowflake.
Method 3: Using Native Cloud Tools and Snowpipe for MongoDB to Snowflake
Snowpipe, provided by Snowflake, enables a shift from the traditional scheduled batch loading jobs to a more dynamic approach. It supersedes the conventional SQL COPY command, facilitating near real-time data availability. Essentially, Snowpipe imports data into a staging area in smaller increments, working in tandem with your cloud provider’s native services, such as AWS or Azure.
For illustration, consider these scenarios for each cloud provider, detailing the integration of your platform’s infrastructure and the transfer of data from MongoDB to a Snowflake warehouse:
AWS: Utilize a Kinesis Data Firehose delivery stream to deposit MongoDB data into an S3 bucket. With event notifications (via SNS/SQS) configured on that bucket, Snowpipe is alerted when new files arrive and imports the data into Snowflake.
Azure: Activate Snowpipe with an Event Grid message corresponding to Blob storage events. Your MongoDB data is initially placed into an external Azure stage. Upon creating a blob storage event message, Snowpipe is alerted via Event Grid when the data is primed for Snowflake insertion. Subsequently, Snowpipe transfers the queued files into a pre-established table in Snowflake. For comprehensive guidance, Snowflake offers a detailed manual on the setup.
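Concretely, the pipe itself is just SQL layered on a stage. Below is a minimal sketch that reuses the external stage and target table from the earlier examples; the pipe name is illustrative, and auto-ingest only fires if the stage location has event notifications configured as described above:
-- Hypothetical pipe: auto-ingest staged MongoDB exports as they land in the external stage
create or replace pipe mongodb_snowpipe
  auto_ingest = true
  as
  copy into mongodb_table
  from @mongodb_ext_stage
  file_format = (type = 'CSV' field_delimiter = ',' skip_header = 1);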
Limitations of Using Native Cloud Tools and Snowpipe
A deep understanding of NoSQL databases, Snowflake, and cloud services is crucial. Troubleshooting in a complex data pipeline environment necessitates significant domain knowledge, which may be challenging for smaller or less experienced data teams.
Long-term management and ownership of the approach can be problematic, as the resources used are often controlled by teams outside the Data department. This requires careful coordination with other engineering teams to establish clear ownership and ongoing responsibilities.
The absence of native tools for applying schema to NoSQL data presents difficulties in schematizing the data, potentially reducing its value in the data warehouse.
MongoDB to Snowflake: Use Cases
Snowflake’s system supports JSON natively, which is central to MongoDB’s document model. This allows direct loading of JSON data into Snowflake without needing to convert it into a fixed schema, eliminating the need for an ETL pipeline and concerns about evolving data structures.
Snowflake’s architecture is designed for scalability and elasticity online. It can handle large volumes of data at varying speeds without resource conflicts with analytics, supporting micro-batch loading for immediate data analysis. Scaling up a virtual warehouse can speed up data loading without causing downtime or requiring data redistribution.
Snowflake’s core is a powerful SQL engine that works seamlessly with BI and analytics tools. Its SQL capabilities extend beyond relational data, enabling access to MongoDB’s JSON data, with its variable schema and nested structures, through SQL. Snowflake’s extensions and the creation of relational views make this JSON data readily usable with SQL-based tools.
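To make this concrete, here is a hedged sketch of landing raw MongoDB JSON in a VARIANT column and querying nested fields with SQL path notation. The table, column, and field names are assumptions; the stage reuses the external stage created earlier:
-- Hypothetical example: store MongoDB documents in a VARIANT column
create or replace table mongodb_raw (doc variant);

copy into mongodb_raw
from @mongodb_ext_stage
file_format = (type = 'JSON');

-- Query nested JSON fields directly with SQL path expressions and casts
select
  doc:customer.name::string as customer_name,
  doc:order.total::number   as order_total
from mongodb_raw
where doc:order.status::string = 'shipped';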
Additional Resources for MongoDB Integrations and Migrations
Stream data from MongoDB Atlas to BigQuery
Move Data from MongoDB to MySQL
Connect MongoDB to Tableau
Sync Data from MongoDB to PostgreSQL
Move Data from MongoDB to Redshift
Conclusion
In this blog, we have covered three methods that you can use to migrate your data from MongoDB to Snowflake. However, the choice of migration method can impact the process’s efficiency and complexity. Using custom scripts or Snowpipe for data ingestion may require extensive manual effort, face challenges with data consistency and real-time updates, and demand specialized technical skills.
To use the native cloud tools, you will need a deep understanding of NoSQL databases, Snowflake, and cloud services. Moreover, troubleshooting can be difficult in such an environment. On the other hand, leveraging LIKE.TG simplifies and automates the migration process by providing a user-friendly interface and pre-built connectors.
VISIT OUR WEBSITE TO EXPLORE LIKE.TG
Want to take LIKE.TG for a spin?
SIGN UP to explore a hassle-free data migration from MongoDB to Snowflake. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.
Share your experience of migrating data from MongoDB to Snowflake in the comments section below!
FAQs to migrate from MongoDB to Snowflake
1. Does MongoDB work with Snowflake?
Yes, MongoDB can work with Snowflake through data integration and migration processes.
2. How do I migrate a database to a Snowflake?
To migrate a database to Snowflake: 1. Extract data from the source database using ETL tools or scripts. 2. Load the extracted data into Snowflake using Snowflake’s data loading utilities or ETL tools, ensuring compatibility and data integrity throughout the process.
3. Can Snowflake handle NoSQL?
While Snowflake supports semi-structured data such as JSON, Avro, and Parquet, it is not designed to directly manage NoSQL databases.
4. Which SQL is used in Snowflake?
Snowflake uses ANSI SQL (SQL:2003 standard) for querying and interacting with data.

Replicating data from MySQL to BigQuery: 2 Easy Methods
With the BigQuery MySQL Connector, users can perform data analysis on MySQL data stored in BigQuery without the need for complex data migration processes. With MySQL BigQuery integration, organizations can leverage the scalability and power of BigQuery for handling large datasets stored in MySQL. Migrating MySQL to BigQuery can be a complex undertaking, necessitating thorough testing and validation to minimize downtime and ensure a smooth transition. This blog will provide 2 easy methods to connect MySQL to BigQuery in real time. The first method uses LIKE.TG ’s automated Data Pipeline to set up this connection, while the second method involves writing custom ETL scripts to perform this data transfer from MySQL to BigQuery. Read along and decide which method suits you the best!
Methods to Connect MySQL to BigQuery
Following are the 2 methods using which you can set up your MySQL to BigQuery integration:
Method 1: Using LIKE.TG Data to Connect MySQL to BigQuery
Method 2: Manual ETL Process to Connect MySQL to BigQuery
Method 1: Using LIKE.TG Data to Connect MySQL to BigQuery
LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources and load it to destinations, but also transform and enrich your data to make it analysis-ready.
Get Started with LIKE.TG for Free
With a ready-to-use Data Integration Platform, LIKE.TG , you can easily move data from MySQL to BigQuery with just 2 simple steps. This does not need you to write any code and will provide you with an error-free, fully managed setup to move data in minutes.
Step 1: Connect and configure your MySQL database.
Click PIPELINES in the Navigation Bar.
Click + CREATE in the Pipelines List View.
In the Select Source Type page, select MySQL as your source.
In the Configure your MySQL Source page, specify the connection settings for your MySQL Source.
Step 2: Choose BigQuery as your Destination
Click DESTINATIONS in the Navigation Bar.
Click + CREATE in the Destinations List View.
In the Add Destination page, select Google BigQuery as the Destination type.
In the Configure your Google BigQuery Warehouse page, specify the required connection details.
It is that simple. While you relax, LIKE.TG will fetch the data and send it to your destination Warehouse.
Instead of building a lot of these custom connections, ourselves, LIKE.TG Data has been really flexible in helping us meet them where they are.
– Josh Kennedy, Head of Data and Business Systems
In addition to this, LIKE.TG lets you bring data from a wide array of sources – Cloud Apps, Databases, SDKs, and more. You can check out the complete list of available integrations.
SIGN UP HERE FOR A 14-DAY FREE TRIAL
Method 2: Manual ETL Process to Connect MySQL to BigQuery
The manual method of connecting MySQL to BigQuery involves writing custom ETL scripts to set up this data transfer process. This method can be implemented in 2 different forms:
Full Dump and Load
Incremental Dump and Load
1. Full Dump and Load
This approach is relatively simple: complete data from the source MySQL table is extracted and migrated to BigQuery. If the target table already exists, drop it and create a new table (or delete the existing data and insert the newly extracted data).
Full Dump and Load is the only option for the first-time load even if the incremental load approach is used for recurring loads. The full load approach can be followed for relatively small tables even for further recurring loads. You can also check out MySQL to Redshift integration.
The high-level steps to be followed to replicate MySQL to BigQuery are:
Step 1: Extract Data from MySQL
Step 2: Clean and Transform the Data
Step 3: Upload to Google Cloud Storage(GCS)
Step 4: Upload to the BigQuery Table from GCS
Let’s take a detailed look at each step to migrate MySQL data to BigQuery.
Step 1: Extract Data from MySQL
There are 2 popular ways to extract data from MySQL – using mysqldump and using SQL query.
Extract data using mysqldump
mysqldump is a client utility that ships with the MySQL installation. It is mainly used to create a logical backup of a database or table. Here is how it can be used to extract one table:
mysqldump -u <db_username> -h <db_host> -p db_name table_name > table_name.sql
Here, the output file table_name.sql will be in the form of INSERT statements like:
INSERT INTO table_name (column1, column2, column3, ...)
VALUES (value1, value2, value3, ...);
This output has to be converted into a CSV file, so you will have to write a small script to perform the conversion. Here is a well-accepted Python script that does exactly this – mysqldump_to_csv.py.
Alternatively, you can create a CSV file using the command below. However, this option works only when mysqldump is run on the same machine as the mysqld server, which is normally not the case.
mysqldump -u [username] -p -t -T/path/to/directory [database] --fields-terminated-by=,
Extract Data using SQL query
The mysql client utility can be used to run SQL commands and redirect the output to a file.
mysql -B -u user database_name -h mysql_host -e "select * from table_name;" >
table_name_data_raw.txt
Further, it can be piped with text editing utilities like sed or awk to clean and format data.
Example:
mysql -B -u user database_name -h mysql_host -e "select * from table_name;" |
sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > table_name_data.csv
Step 2: Clean and Transform the Data
Apart from transforming data for business logic, there are some basic things to keep in mind:
BigQuery expects CSV data to be UTF-8 encoded.
BigQuery does not enforce Primary Key and unique key constraints; the ETL process has to take care of that (a deduplication sketch is shown after this list).
Column types are slightly different. Most of the types have either equivalent or convertible types. Here is a list of common data types.
Fortunately, the default date format in MySQL is the same, YYYY-MM-DD. Hence, while taking mysqldump there is no need to make any specific changes for this. If you are using a string field to store dates and want to convert them to dates while moving to BigQuery, you can use the STR_TO_DATE function. A DATE value must be dash (-) separated and in the form YYYY-MM-DD (year-month-day). You can visit their official page to know more about BigQuery data types.
Syntax :
STR_TO_DATE(str,format)
Example :
SELECT STR_TO_DATE('31,12,1999','%d,%m,%Y');
Result :
1999-12-31
The hh:mm:ss (hour-minute-second) portion of the timestamp must use a colon (:) separator.
Make sure text columns are quoted if they can potentially contain delimiter characters.
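Because BigQuery accepts duplicate keys, a common pattern is to deduplicate after each load, as referenced in the primary key point above. The following is a minimal sketch only; the dataset, table, and column names (id, updated_at) are assumptions:
-- Hypothetical post-load deduplication: keep only the most recent row per id
CREATE OR REPLACE TABLE data_set.my_table AS
SELECT * EXCEPT(rn)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY id ORDER BY updated_at DESC) AS rn
  FROM data_set.my_table
)
WHERE rn = 1;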
Step 3: Upload to Google Cloud Storage(GCS)
gsutil is a command-line tool for working with objects in GCS. It can be used to upload files from different locations to your GCS bucket.
To copy a file to GCS:
gsutil cp table_name_data.csv gs://my-bucket/path/to/folder/
To copy an entire folder:
gsutil cp -r dir gs://my-bucket/path/to/parent/
If the files are present in S3, the same command can be used to transfer to GCS.
gsutil cp -R s3://bucketname/source/path gs://bucketname/destination/path
Storage Transfer Service
Storage Transfer Service from Google Cloud is another option to upload files to GCS from S3 or other online data sources like an HTTP/HTTPS location. The destination or sink is always a Cloud Storage bucket. It can also be used to transfer data from one GCS bucket to another.
This service is extremely handy when it comes to data movement into GCS, with support for:
Schedule one-time or recurring data transfer.
Delete existing objects in the destination if no corresponding source object is present.
Deletion of source object after transferring.
Periodic synchronization between source and sink with advanced filters based on file creation dates, file name, etc.
Upload from Web Console
If you are uploading from your local machine, web console UI can also be used to upload files to GCS. Here are the steps to upload a file to GCS with screenshots.
1. Log in to your GCP account. In the left bar, click Storage and go to Browser.
2. Select the GCS bucket to which you want to upload the file. Here, the bucket we are using is test-data-LIKE.TG . Click on the bucket.
3. On the bucket details page below, click the upload files button and select file from your system.
4. Wait till the upload is completed. Now, the uploaded file will be listed in the bucket:
Step 4: Upload to the BigQuery Table from GCS
You can use the bq command-line tool to interact with BigQuery. It is extremely convenient for uploading data to a table from GCS. Use the bq load command, and specify CSV as the source_format.
The general syntax of bq load:
bq --location=[LOCATION] load --source_format=[FORMAT] [DATASET].[TABLE] [PATH_TO_SOURCE] [SCHEMA]
[LOCATION] is your location (optional).
[FORMAT] is CSV.
[DATASET] is an existing dataset.
[TABLE] is the name of the table into which you’re loading data.
[PATH_TO_SOURCE] is a fully-qualified Cloud Storage URI.
[SCHEMA] is a valid schema. The schema can be a local JSON file or specified inline. The --autodetect flag can also be used instead of supplying a schema definition.
There are a bunch of options specific to CSV data loads. To see the full list of options, visit the BigQuery documentation on loading CSV data from Cloud Storage here.
Following are some example commands to load data:
Specify schema using a JSON file:
bq --location=US load --source_format=CSV mydataset.mytable gs://mybucket/mydata.csv ./myschema.json
If you want schema auto-detected from the file:
bq --location=US load --autodetect --source_format=CSV mydataset.mytable gs://mybucket/mydata.csv
If you are writing to the existing table, BigQuery provides three options – Write if empty, Append to the table, Overwrite table. Also, it is possible to add new fields to the table while uploading data. Let us see each with an example.
To overwrite the existing table:
bq --location=US load --autodetect --replace --source_format=CSV mydataset.mytable
gs://mybucket/mydata.csv
To append to an existing table:
bq --location=US load --autodetect --noreplace --source_format=CSV mydataset.mytable
gs://mybucket/mydata.csv ./myschema.json
To add a new field to the table, a new schema file with the extra field is given:
bq --location=asia-northeast1 load --noreplace --schema_update_option=ALLOW_FIELD_ADDITION
--source_format=CSV mydataset.mytable gs://mybucket/mydata.csv ./myschema.json
2. Incremental Dump and Load
In certain use cases, loading data once from MySQL to BigQuery will not be enough. There might be cases where, once the initial data is extracted from the source, we need to keep the target table in sync with the source. For a small table, doing a full data dump every time might be feasible, but if the data volume is higher, we should think of a delta approach.
The following steps are used in the Incremental approach to connect MySQL to Bigquery:
Step 1: Extract Data from MySQL
Step 2: Update Target Table in BigQuery
Step 1: Extract Data from MySQL
For incremental data extraction from MySQL, use SQL with proper predicates and write the output to a file. mysqldump cannot be used here as it always extracts full data.
Eg: Extracting rows based on the updated_timestamp column and converting to CSV.
mysql -B -u user database_name -h mysql_host -e "select * from table_name where
updated_timestamp < now() and updated_timestamp > '#max_updated_ts_in_last_run#'" |
sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > table_name_data.csv
Note: In case any hard delete happened in the source table, it will not be reflected in the target table.
Step 2: Update Target Table in BigQuery
First, upload the extracted data into a staging table, from which the new data will be upserted into the final BigQuery table. This will be a full load; please refer to the full data load section above. Let’s call the staging table delta_table. Now there are two approaches to load data to the final table:
1. Update the values of existing records in the final table, and insert new rows from the delta table which are not in the final table.
UPDATE data_set.final_table t
SET t.value = s.value
FROM data_set.delta_table s
WHERE t.id = s.id;
INSERT data_set.final_table (id, value)
SELECT id, value
FROM data_set.delta_table
WHERE NOT id IN (SELECT id FROM data_set.final_table);
2. Delete rows from the final table which are present in the delta table. Then insert all rows from the delta table to the final table.
DELETE data_set.final_table f
WHERE f.id IN (SELECT id from data_set.delta_table);
INSERT data_set.final_table (id, value)
SELECT id, value
FROM data_set.delta_table;
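In addition to the two approaches above, BigQuery also supports the standard MERGE statement, which combines the update and the insert into a single atomic operation. A minimal sketch, assuming the same id and value columns used above:
-- Optional alternative: one MERGE that updates matches and inserts new rows
MERGE data_set.final_table t
USING data_set.delta_table s
ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET value = s.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (s.id, s.value);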
Disadvantages of Manually Loading Data
Manually loading data from MySQL to BigQuery presents several drawbacks:
Cumbersome Process: While custom code suits one-time data movements, frequent updates become burdensome manually, leading to inefficiency and bulkiness.
Data Consistency Issues: BigQuery lacks guaranteed data consistency for external sources, potentially causing unexpected behavior during query execution amidst data changes.
Location Constraint: The data set’s location must align with the Cloud Storage Bucket’s region or multi-region, restricting flexibility in data storage.
Limitation with CSV Format: CSV files cannot accommodate nested or repeated data due to format constraints, limiting data representation possibilities.
File Compression Limitation: Mixing compressed and uncompressed files in the same load job using CSV format is not feasible, adding complexity to data loading tasks.
File Size Restriction: The maximum size for a gzip file in CSV format is capped at 4 GB, potentially limiting the handling of large datasets efficiently.
What Can Be Migrated From MySQL To BigQuery?
Since the mid-1990s, MySQL has been one of the most widely used open-source relational database management systems (RDBMS), with businesses of all kinds using it today.
MySQL is fundamentally a relational database. It is renowned for its dependability and speedy performance and is used to arrange and query data in systems of rows and columns.
Both MySQL and BigQuery use tables to store their data. When you migrate a table from MySQL to BigQuery, it is stored as a standard, or managed, table.
Both MySQL and BigQuery employ SQL, but they accept distinct data types, therefore you’ll need to convert MySQL data types to BigQuery equivalents. Depending on the data pipeline you utilize, there are several options for dealing with this.
Once in BigQuery, the table is encrypted and kept in Google’s warehouse. Users may execute complicated queries or accomplish any BigQuery-enabled job.
The Advantages of Connecting MySQL To BigQuery
BigQuery is intended for efficient and speedy analytics, and it does so without compromising operational workloads, which you will most likely continue to manage in MySQL.
It improves workflows and establishes a single source of truth. Switching between platforms can be difficult and time-consuming for analysts. Updating BigQuery with MySQL ensures that both data storage systems are aligned around the same source of truth and that other platforms, whether operational or analytical, are constantly bringing in the right data.
BigQuery increases data security. By replicating data from MySQL to BigQuery, customers avoid the requirement to provide rights to other data engineers on operational systems.
BigQuery handles Online Analytical Processing (OLAP), whereas MySQL is designed for Online Transaction Processing (OLTP). Because it is a cost-effective, serverless, and multi-cloud data warehouse, BigQuery can deliver deeper data insights and aid in the conversion of large data into useful insights.
Conclusion
The article listed 2 methods to set up your BigQuery MySQL integration. The first method relies on LIKE.TG ’s automated Data Pipeline to transfer data, while the second method requires you to write custom scripts to perform ETL processes from MySQL to BigQuery.
Complex analytics on data requires moving data to Data Warehouses like BigQuery. It takes multiple steps to extract data, clean it and upload it. It requires real effort to ensure there is no data loss at each stage of the process, whether it happens due to data anomalies or type mismatches.
Visit our Website to Explore LIKE.TG
Want to take LIKE.TG for a spin? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite first hand. Check out LIKE.TG pricing to choose the best plan for your organization.
Share your understanding of connecting MySQL to BigQuery in the comments section below!

Oracle to Snowflake: Data Migration in 2 Easy Methods
Migrating from Oracle to Snowflake? This guide outlines two straightforward methods to move your data. Learn how to leverage Snowflake’s cloud architecture to access insights from your Oracle databases. Ultimately, you can choose the best of the two methods based on your business requirements. Read along to learn how to migrate data seamlessly from Oracle to Snowflake.
Overview of Oracle
Oracle Database is a robust relational database management system (RDBMS) known for its scalability, reliability, and advanced features like high availability and security. Oracle offers an integrated portfolio of cloud services featuring IaaS, PaaS, and SaaS, posing competition to big cloud providers. The company also designs and markets enterprise software solutions in the areas of ERP, CRM, SCM, and HCM, addressing a wide range of industries such as finance, health, and telecommunication institutions.
Overview of Snowflake
Snowflake is a cloud-based data warehousing platform designed for modern data analytics and processing. Snowflake separates compute, storage, and services, so each can scale independently. It provides a SQL data warehouse for querying and analyzing structured and semi-structured data stored in cloud object storage such as Amazon S3 or Azure Blob Storage.
Advantages of Snowflake
Scalability: Using Snowflake, you can automatically scale the compute and storage resources to manage varying workloads without any human intervention.
Supports Concurrency: Snowflake delivers high performance when dealing with multiple users supporting mixed workloads without performance degradation.
Efficient Performance: You can achieve optimized query performance through the unique architecture of Snowflake, with particular techniques applied in columnar storage, query optimization, and caching.
Why Choose Snowflake over Oracle?
Here, I have listed some reasons why Snowflake is chosen over Oracle.
Scalability and Flexibility: Snowflake is intrinsically designed for the cloud to deliver dynamic scalability with near-zero manual tuning or infrastructure management. Horizontal and vertical scaling can be more complex and expensive in traditional Oracle on-premises architecture.
Concurrency and Performance: Snowflake’s architecture supports automatic and elastic scaling, ensuring consistent performance even under heavy workloads. Whereas Oracle’s monolithic architecture may struggle with scalability and concurrency challenges as data volumes grow.
Ease of Use: Snowflake’s platform is known for its simplicity and ease of use. Although quite robust, Oracle normally requires specialized skills and resources in configuration, management, and optimization.
Common Challenges of Migration from Oracle to Snowflake
Let us also discuss what are the common challenges you might face while migrating your data from Oracle to Snowflake.
Architectural Differences: Oracle has a traditional on-premises architecture, while Snowflake has a cloud-native architecture. This makes the adaptation of existing applications and workflows developed for one environment into another quite challenging.
Compatibility Issues: There are differences in SQL dialects, data types, and procedural languages between Oracle and Snowflake that will have to be changed in queries, scripts, and applications to be migrated for compatibility and optimal performance.
Performance Tuning: Optimizing performance in Snowflake to Oracle’s performance levels at a minimum requires knowledge of Snowflake’s capabilities and the tuning configurations it offers, among many other special features such as clustering keys and auto-scaling.
Integrate Oracle with Snowflake in a hassle-free manner.
Method 1: Using LIKE.TG Data to Set up Oracle to Snowflake Integration
Using LIKE.TG Data, a No-code Data Pipeline, you can directly transfer data from Oracle to Snowflake and other Data Warehouses, BI tools, or a destination of your choice in a completely hassle-free automated manner.
Method 2: Manual ETL Process to Set up Oracle to Snowflake Integration
In this method, you can convert your Oracle data to CSV files using SQL*Plus and then transform them for compatibility. You then stage the files in S3 and finally load them into Snowflake using the COPY command. This method can be time-consuming and can lead to data inconsistency.
Get Started with LIKE.TG for Free
Methods to Set up Oracle to Snowflake Integration
There are many ways of loading data from Oracle to Snowflake. In this blog, you will look into two popular ways. You can also read our article on Snowflake Excel integration.
In the end, you will have a good understanding of each of these two methods. This will help you to make the right decision based on your use case:
Method 1: Using LIKE.TG Data to Set up Oracle to Snowflake Integration
LIKE.TG Data, a No-code Data Pipeline, helps you directly transfer data from Oracle to Snowflake and other Data Warehouses, BI tools, or a destination of your choice in a completely hassle-free automated manner.
The steps to load data from Oracle to Snowflake using LIKE.TG Data are as follows:
Step 1: Configure Oracle as your Source
Connect your Oracle account to LIKE.TG ’s platform. LIKE.TG has an in-built Oracle Integration that connects to your account within minutes.
Log in to your LIKE.TG account, and in the Navigation Bar, click PIPELINES.
Next, in the Pipelines List View, click + CREATE.
On the Select Source Type page, select Oracle.
Specify the required information in the Configure your Oracle Source page to complete the source setup.
Step 2: Choose Snowflake as your Destination
Select Snowflake as your destination and start moving your data.
If you don’t already have a Snowflake account, read the documentation to know how to create one.
Log in to your Snowflake account and configure your Snowflake warehouse by running this script.
Next, obtain your Snowflake URL from your Snowflake warehouse by clicking on Admin > Accounts > LOCATOR.
On your LIKE.TG dashboard, click DESTINATIONS > + CREATE.
Select Snowflake as the destination in the Add Destination page.
Specify the required details in the Configure your Snowflake Warehouse page.
Click TEST CONNECTION > SAVE & CONTINUE.
With this, you have successfully set up Oracle to Snowflake Integration using LIKE.TG Data.
For more details on Oracle to Snowflake integration, refer to the LIKE.TG documentation:
Learn how to set up Oracle as a source.
Learn how to set up Snowflake as a destination.
Here’s what the data scientist at Hornblower, a global leader in experiences and transportation, has to say about LIKE.TG Data.
Data engineering is like an orchestra where you need the right people to play each instrument of their own, but LIKE.TG Data is like a band on its own. So, you don’t need all the players.
– Karan Singh Khanuja, Data Scientist, Hornblower
Using LIKE.TG as a solution to their data movement needs, they could easily migrate data to the warehouse without spending much on engineering resources. You can read the full story here.
Method 2: Manual ETL Process to Set up Oracle to Snowflake Integration
Oracle and Snowflake are two distinct data storage options since their structures are very dissimilar. Although there is no direct way to load data from Oracle to Snowflake, using a mediator that connects to both Oracle and Snowflake can ease the process. Steps to move data from Oracle to Snowflake can be categorized as follows:
Step 1: Extract Data from Oracle to CSV using SQL*Plus
Step 2: Data Type Conversion and Other Transformations
Step 3: Staging Files to S3
Step 4: Finally, Copy Staged Files to the Snowflake Table
Let us go through these steps to connect Oracle to Snowflake in detail.
Step 1: Extract data from Oracle to CSV using SQL*Plus
SQL*Plus is a query tool installed with every Oracle Database Server or Client installation. It can be used to query and redirect the result of an SQL query to a CSV file. The command used for this is SPOOL.
Eg :
-- Turn on the spool
spool spool_file.txt
-- Run your Query
select * from dba_table;
-- Turn off spooling
spool off;
The spool file will not be visible until spooling is turned off.
If the spool file doesn’t exist already, a new file will be created. If it exists, it will be overwritten by default. Oracle 10g introduced an APPEND option that can be used to append to an existing file.
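For example, a minimal sketch using the APPEND option (file and table names reuse the example above):
-- Append query results to the existing spool file instead of overwriting it (Oracle 10g+)
spool spool_file.txt append
select * from dba_table;
spool off;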
Most of the time the data extraction logic will be executed in a Shell script. Here is a very basic example script to extract full data from an Oracle table:
#!/usr/bin/bash
FILE="students.csv"
sqlplus -s user_name/password@oracle_db <<EOF
SET PAGESIZE 50000
SET COLSEP ","
SET LINESIZE 200
SET FEEDBACK OFF
SPOOL $FILE
SELECT * FROM STUDENTS;
SPOOL OFF
EXIT
EOF
SET PAGESIZE – The number of lines per page. The header line will be there on every page.
SET COLSEP – Setting the column separator.
SET LINESIZE – The number of characters per line. The default is 80. You can set this to a value in a way that the entire record comes within a single line.
SET FEEDBACK OFF – Feedback is turned off to prevent logs from appearing in the CSV file.
SPOOL $FILE – The filename where you want to write the results of the query.
SELECT * FROM STUDENTS – The query to be executed to extract data from the table.
SPOOL OFF – To stop writing the contents of the SQL session to the file.
Incremental Data Extract
As discussed in the above section, once Spool is on, any SQL can be run and the result will be redirected to the specified file. To extract data incrementally, you need to generate SQL with proper conditions to select only records that are modified after the last data pull.
Eg:
select * from students where last_modified_time > <last_pull_time> and last_modified_time <= sysdate;
Now the result set will have only changed records after the last pull.
Step 2: Data type conversion and formatting
While transferring data from Oracle to Snowflake, data might have to be transformed as per business needs. Apart from such use case-specific changes, there are certain important things to be noted for smooth data movement. Also, check out Oracle to MySQL Integration.
Many errors can be caused by a character set mismatch between the source and target. Note that Snowflake supports all major character sets, including UTF-8 and UTF-16. The full list can be found here.
When moving data from Oracle to big data systems, data integrity can be compromised due to the lack of support for SQL constraints. Snowflake lets you define all standard SQL constraints such as UNIQUE, PRIMARY KEY, FOREIGN KEY, and NOT NULL (note that only NOT NULL is enforced), which is a great help for making sure data has moved as expected.
Snowflake’s type system covers most primitive and advanced data types which include nested data structures like struct and array. Below is the table with information on Oracle data types and the corresponding Snowflake counterparts.
Often, date and time formats require a lot of attention while creating data pipelines. Snowflake is quite flexible here as well. If a custom format is used for dates or times in the file to be inserted into the table, this can be explicitly specified using the “File Format Option”. The complete list of date and time formats can be found here.
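As an illustration, a named file format carrying explicit date and timestamp formats might look like the sketch below; the format name and format strings are assumptions:
-- Hypothetical named file format declaring custom date/timestamp formats
create or replace file format oracle_csv_format
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  date_format = 'YYYY-MM-DD'
  timestamp_format = 'YYYY-MM-DD HH24:MI:SS';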
Step 3: Stage Files to S3
To load data from Oracle to Snowflake, the data has to be uploaded to a cloud staging area first. If your Snowflake instance is running on AWS, the data has to be uploaded to an S3 location that Snowflake has access to. This process is called staging. The Snowflake stage can be either internal or external.
Internal Stage
If you choose to go with this option, each user and table is automatically assigned an internal stage, which can be used to stage data related to that user or table. Internal stages can also be created explicitly with a name.
For a user, the default internal stage will be named as ‘@~’.
For a table, the default internal stage will have the same name as the table.
There is no option to alter or drop an internal default stage associated with a user or table.
Unlike named stages, file format options cannot be set for default user or table stages.
If an internal stage is created explicitly by the user using SQL statements with a name, many data loading options can be assigned to the stage like file format, date format, etc. When data is loaded to a table through this stage those options are automatically applied.
Note: The rest of this document discusses many Snowflake commands. Snowflake comes with a very intuitive and stable web-based interface to run SQL and commands. However, if you prefer to work with a lightweight command-line utility to interact with the database you might like SnowSQL – a CLI client available in Linux/Mac/Windows to run Snowflake commands. Read more about the tool and options here.
Now let’s have a look at commands to create a stage:
Create a named internal stage my_oracle_stage and assign some default options:
create or replace stage my_oracle_stage
copy_options= (on_error='skip_file')
file_format= (type = 'CSV' field_delimiter = ',' skip_header = 1);
PUT is the command used to stage files to an internal Snowflake stage. The syntax of the PUT command is:
PUT file://<path_to_your_file>/<your_filename> @<internal_stage_name>
Eg:
Upload a file items_data.csv in the /tmp/oracle_data/data/ directory to an internal stage named oracle_stage.
put file:///tmp/oracle_data/data/items_data.csv @oracle_stage;
While uploading the file, you can set many configurations to enhance data load performance, like the degree of parallelism and automatic compression. Complete information can be found here.
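For instance, the same PUT command with explicit parallelism and compression settings might look like this sketch (the option values are illustrative):
-- Illustrative PUT tuning the number of parallel upload threads and compression
put file:///tmp/oracle_data/data/items_data.csv @oracle_stage
  parallel = 8
  auto_compress = true;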
External Stage
Let us now look at the external staging option and understand how it differs from the internal stage. Snowflake supports any accessible Amazon S3 or Microsoft Azure location as an external staging location. You can create a stage pointing to that location, and data can be loaded directly into the Snowflake table through that stage. There is no need to move the data to an internal stage.
If you want to create an external stage pointing to an S3 location, IAM credentials with proper access permissions are required. If the data needs to be decrypted before loading into Snowflake, the proper keys have to be provided. Here is an example of creating an external stage:
create or replace stage oracle_ext_stage url='s3://snowflake_oracle/data/load/files/'
credentials=(aws_key_id='1d318jnsonmb5#dgd4rrb3c' aws_secret_key='aii998nnrcd4kx5y6z')
encryption=(master_key = 'eSxX0jzskjl22bNaaaDuOaO8=');
Once data is extracted from Oracle it can be uploaded to S3 using the direct upload option or using AWS SDK in your favorite programming language. Python’s boto3 is a popular one used under such circumstances. Once data is in S3, an external stage can be created to point to that location.
Step 4: Copy staged files to Snowflake table
So far – you have extracted data from Oracle, uploaded it to an S3 location, and created an external Snowflake stage pointing to that location. The next step is to copy data to the table. The command used to do this is COPY INTO. Note: To execute the COPY INTO command, compute resources in Snowflake virtual warehouses are required and your Snowflake credits will be utilized.
Eg:
To load from a named internal stage
copy into oracle_table
from @oracle_stage;
To load from the external stage (here only one file is specified):
copy into my_ext_stage_table
from @oracle_ext_stage/tutorials/dataloading/items_ext.csv;
You can even copy directly from an external location without creating a stage:
copy into oracle_table
from 's3://mybucket/oracle_snow/data/files'
credentials=(aws_key_id='$AWS_ACCESS_KEY_ID' aws_secret_key='$AWS_SECRET_ACCESS_KEY')
encryption=(master_key = 'eSxX009jhh76jkIuLPH5r4BD09wOaO8=')
file_format = (format_name = csv_format);
Files can be specified using patterns:
copy into oracle_pattern_table
from @oracle_stage
file_format = (type = 'CSV')
pattern='.*/.*/.*[.]csv[.]gz';
Some commonly used options for CSV file loading using the COPY command are:
DATE_FORMAT – Specify any custom date format you used in the file so that Snowflake can parse it properly.
TIME_FORMAT – Specify any custom date format you used in the file.
COMPRESSION – If your data is compressed, specify algorithms used to compress.
RECORD_DELIMITER – To mention lines separator character.
FIELD_DELIMITER – To indicate the character separating fields in the file.
SKIP_HEADER – The number of header lines to skip while inserting data into the table.
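For instance, a COPY statement combining several of these options might look like the following sketch; the option values are illustrative, while the table and stage names reuse the earlier examples:
-- Illustrative COPY combining several of the options listed above
copy into oracle_table
from @oracle_ext_stage
file_format = (type = 'CSV'
               field_delimiter = ','
               skip_header = 1
               compression = 'GZIP'
               date_format = 'YYYY-MM-DD'
               time_format = 'HH24:MI:SS');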
Update Snowflake Table
We have discussed how to extract data incrementally from the Oracle table. Once data is extracted incrementally, it cannot be inserted into the target table directly. There will be new and updated records that have to be treated accordingly.
Earlier in this document, we mentioned that Snowflake supports SQL constraints. Adding to that, another surprising feature from Snowflake is support for row-level data manipulations which makes it easier to handle delta data load. The basic idea is to load incrementally extracted data into an intermediate or temporary table and modify records in the final table with data in the intermediate table. The three methods mentioned below are generally used for this.
1. Update the rows in the target table with new data (with the same keys). Then insert new rows from the intermediate or landing table which are not in the final table.
UPDATE oracle_target_table t SET t.value = s.value FROM landing_delta_table s WHERE t.id = s.id;
INSERT INTO oracle_target_table (id, value)
SELECT id, value FROM landing_delta_table WHERE id NOT IN (SELECT id FROM oracle_target_table);
2. Delete rows from the target table which are also in the landing table. Then insert all rows from the landing table into the final table. Now, the final table will have the latest data without duplicates.
DELETE FROM oracle_target_table WHERE id IN (SELECT id FROM landing_delta_table);
INSERT INTO oracle_target_table (id, value)
SELECT id, value FROM landing_delta_table;
3. MERGE statement – A standard SQL MERGE statement combines inserts and updates. It is used to apply the changes in the landing table to the target table with one SQL statement.
MERGE into oracle_target_table t1 using landing_delta_table t2 on t1.id = t2.id
WHEN matched then update set value = t2.value
WHEN not matched then
INSERT (id, value) values (t2.id, t2.value);
This method of connecting Oracle to Snowflake works when you have a comfortable project timeline and a pool of experienced engineering resources that can build and maintain the pipeline. However, the method mentioned above comes with a lot of coding and maintenance overhead.
Limitations of Manual ETL Process
Here are some of the challenges of migrating from Oracle to Snowflake.
Cost: The cost of hiring an ETL developer to construct an Oracle to Snowflake ETL pipeline might not be favorable in terms of expenses. Method 2 is not a cost-efficient option.
Maintenance: Maintenance is very important for a data processing system; your ETL code needs to be updated regularly because development tools upgrade their dependencies and industry standards change. Maintenance also consumes precious engineering bandwidth that might be utilized elsewhere.
Scalability: Indeed, scalability is paramount! ETL systems can fail over time if processing conditions change. For example, what if incoming data increases 10X? Can your processes handle such a sudden increase in load? A question like this requires serious thinking when opting for the manual ETL code approach.
Benefits of Replicating Data from Oracle to Snowflake
Many business applications are replicating data from Oracle to Snowflake, not only because of the superior scalability but also because of the other advantages that set Snowflake apart from traditional Oracle environments. Many businesses use an Oracle to Snowflake converter to help facilitate this data migration.
Some of the benefits of data migration from Oracle to Snowflake include:
Snowflake promises high computational power. In case there are many concurrent users running complex queries, the computational power of the Snowflake instance can be changed dynamically. This ensures that there is less waiting time for complex query executions.
The agility and elasticity offered by the Snowflake Cloud Data Warehouse solution are unmatched. This gives you the liberty to scale only when needed and pay for what you use.
Snowflake is a completely managed service. This means you can get your analytics projects running with minimal engineering resources.
Snowflake gives you the liberty to work seamlessly with Semi-structured data. Analyzing this in Oracle is super hard.
Conclusion
In this article, you have learned about two different approaches to set up Oracle to Snowflake integration. The manual method involves the use of SQL*Plus and also staging the files in Amazon S3 before copying them into the Snowflake Data Warehouse. This method requires more effort and engineering bandwidth to connect Oracle to Snowflake. Whereas, if you require real-time data replication and are looking for a fully automated solution, then LIKE.TG is the right choice for you. The many benefits of migrating from Oracle to Snowflake make it an attractive solution.
Learn more about LIKE.TG
Want to try LIKE.TG ?
Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite first hand.
FAQs to connect Oracle to Snowflake
1. How do you migrate from Oracle to Snowflake?
To migrate from Oracle to Snowflake, export data from Oracle using tools like Oracle Data Pump or SQL Developer, transform it as necessary, then load it into Snowflake using Snowflake’s COPY command or bulk data loading tools like SnowSQL or third-party ETL tools like LIKE.TG Data.
2. What is the most efficient way to load data into Snowflake?
The most efficient way to load data into Snowflake is through its bulk loading options like Snowflake’s COPY command, which supports loading data in parallel directly from cloud storage (e.g., AWS S3, Azure Blob Storage) into tables, ensuring fast and scalable data ingestion.
3. Why move from SQL Server to Snowflake?
Moving from SQL Server to Snowflake offers advantages such as scalable cloud architecture with separate compute and storage, eliminating infrastructure management, and enabling seamless integration with modern data pipelines and analytics tools for improved performance and cost-efficiency.

DynamoDB to Redshift: 4 Best Methods
When you use different kinds of databases, there is often a need to migrate data between them. A specific use case that frequently comes up is the transfer of data from your transactional database to your data warehouse, such as copying data from DynamoDB to Redshift. This article introduces you to AWS DynamoDB and Redshift. It also provides 4 methods (with detailed instructions) that you can use to migrate data from AWS DynamoDB to Redshift.
Method 1: DynamoDB to Redshift Using LIKE.TG Data
LIKE.TG Data, an Automated No-Code Data Pipeline can transfer data from DynamoDB to Redshift and provide you with a hassle-free experience. You can easily ingest data from the DynamoDB database using LIKE.TG ’s Data Pipelines and replicate it to your Redshift account without writing a single line of code. LIKE.TG ’s end-to-end data management service automates the process of not only loading data from DynamoDB but also transforming and enriching it into an analysis-ready form when it reaches Redshift.
Get Started with LIKE.TG for Free
LIKE.TG supports direct integrations with DynamoDB and 150+ Data sources (including 40 free sources) and its Data Mapping feature works continuously to replicate your data to Redshift and builds a single source of truth for your business. LIKE.TG takes full charge of the data transfer process, allowing you to focus your resources and time on other key business activities.
Method 2: DynamoDB to Redshift Using Redshift’s COPY Command
This method operates on the Amazon Redshift’s COPY command which can accept a DynamoDB URL as one of the inputs. This way, Redshift can automatically manage the process of copying DynamoDB data on its own. This method is suited for one-time data transfer.
Method 3: DynamoDB to Redshift Using AWS Data Pipeline
This method uses AWS Data Pipeline, which first exports data from DynamoDB to S3. Afterward, data is transferred from S3 to Redshift using Redshift’s COPY command. However, it cannot transfer the data directly from DynamoDB to Redshift.
Method 4: DynamoDB to Redshift Using Dynamo DB Streams
This method leverages the DynamoDB Streams which provide a time-ordered sequence of records that contains data modified inside a DynamoDB table. This item-level record of DynamoDB’s table activity can be used to recreate a similar item-level table activity in Redshift using some client application that is capable of consuming this stream. This method is better suited for regular real-time data transfer.
Methods to Copy Data from DynamoDB to Redshift
Copying data from DynamoDB to Redshift can be accomplished in 4 ways depending on the use case. Following are the ways to copy data from DynamoDB to Redshift:
Method 1: DynamoDB to Redshift Using LIKE.TG Data
Method 2: DynamoDB to Redshift Using Redshift’s COPY Command
Method 3: DynamoDB to Redshift Using AWS Data Pipeline
Method 4: DynamoDB to Redshift Using DynamoDB Streams
Each of these 4 methods is suited for the different use cases and involves a varied range of effort. Let’s dive in.
Method 1: DynamoDB to Redshift Using LIKE.TG Data
LIKE.TG Data, an Automated No-code Data Pipeline, helps you directly transfer your AWS DynamoDB data to Redshift in real-time in a completely automated manner. LIKE.TG ’s fully managed pipeline uses DynamoDB’s data streams to support Change Data Capture (CDC) for its tables. LIKE.TG also facilitates DynamoDB’s data replication to manage the ingestion information via Amazon DynamoDB Streams and Amazon Kinesis Data Streams.
Here are the 2 simple steps you need to use to move data from DynamoDB to Redshift using LIKE.TG :
Step 1) Authenticate Source: Connect your DynamoDB account as a source for LIKE.TG by entering a unique name for LIKE.TG Pipeline, AWS Access Key, AWS Secret Key, and AWS Region. This is shown in the below image.
Step 2) Configure Destination: Configure the Redshift data warehouse as the destination for your LIKE.TG Pipeline. You have to provide, warehouse name, database password, database schema, database port, and database username. This is shown in the below image.
That is it! LIKE.TG will take care of reliably moving data from DynamoDB to Redshift with no data loss.
Sign Up for a 14 day free Trial
Here are more reasons to try LIKE.TG :
Schema Management: LIKE.TG takes away the tedious task of schema management; it automatically detects the schema of incoming data and maps it to your Redshift schema.
Transformations: LIKE.TG provides preload transformations through Python code. It also allows you to run transformation code for each event in the data pipelines you set up. LIKE.TG also offers drag and drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Live Monitoring: LIKE.TG allows you to monitor the data flow and check where your data is at a particular point in time.
With continuous real-time data movement, LIKE.TG allows you to combine Amazon DynamoDB data along with your other data sources and seamlessly load it to Redshift with a no-code, easy-to-setup interface.
Method 2: DynamoDB to Redshift Using Redshift’s COPY Command
This is by far the simplest way to copy a table from DynamoDB to Redshift. Redshift’s COPY command can accept a DynamoDB URL as one of the inputs and manage the copying process on its own. The syntax for the COPY command is as below.
copy <target_tablename> from 'dynamodb://<source_table_name>'
authorization
readratio <integer>;
For now, let’s assume you need to move the product_details_v1 table from DynamoDB to a Redshift table named product_details_v1_tgt. The command to move the data will be as follows.
COPY product_details_v1_tgt from 'dynamodb://product_details_v1'
credentials 'aws_access_key_id=<access_key_id>;aws_secret_access_key=<secret_access_key>'
readratio 40;
The “readratio” parameter in the above command specifies the share of the DynamoDB table’s provisioned read capacity that can be used for this operation. This operation is usually performance-intensive, and it is recommended to keep this value below 50% to avoid the source table getting too busy.
Limitations of Using Redshift’s Copy Command to Load Data from DynamoDB to Redshift
The above command may look easy, but in real life, there are multiple problems that a user needs to be careful about while doing this. A list of such critical factors that should be considered is given below.
DynamoDB and Redshift follow different sets of rules for their table names. While DynamoDB allows for the use of up to 255 characters to form the table name, Redshift limits it to 127 characters and prohibits the use of many special characters, including dots and dashes. In addition, Redshift table names are case-insensitive.
While copying data from DynamoDB to Redshift, Redshift tries to map DynamoDB attribute names to Redshift column names. If there is no match for a Redshift column name, it is populated as empty or NULL depending on the value of the EMPTYASNULL configuration parameter in the COPY command.
All the attribute names in DynamoDB that cannot be matched to column names in Redshift are discarded.
At the moment, the COPY command only supports STRING and NUMBER data types in DynamoDB.
The above method works well when the copying operation is a one-time operation.
Method 3: DynamoDB to Redshift Using AWS Data Pipeline
AWS Data Pipeline is Amazon’s own service to execute the migration of data from one point to another within the AWS ecosystem. Unfortunately, it does not directly provide an option to copy data from DynamoDB to Redshift, but it gives us an option to export DynamoDB data to S3. From S3, we will need to use a COPY command to recreate the table in Redshift. Follow the steps below to copy data from DynamoDB to Redshift using AWS Data Pipeline:
Create an AWS Data Pipeline from the AWS Management Console and select the option “Export DynamoDB table to S3” in the source option, as shown in the image below. A detailed account of how to use AWS Data Pipeline can be found in this blog post.
Once the Data Pipeline completes the export, use the COPY command with the source path as the JSON file location. The COPY command is intelligent enough to autoload the table using JSON attributes. The following command can be used to accomplish this.
COPY product_details_v1_tgt from 's3://my_bucket/product_details_v1.json'
credentials 'aws_access_key_id=<access_key_id>;aws_secret_access_key=<secret_access_key>'
json 'auto';
In the above command, product_details_v1.json is the output of the AWS Data Pipeline execution. Alternatively, instead of the ‘auto’ argument, a JSONPaths file can be specified to map the JSON attribute names to Redshift columns, in case the two do not match.
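A minimal sketch of that variant is shown below; the JSONPaths file location is a hypothetical S3 object containing one JSON path expression per target column, listed in column order:
-- Illustrative COPY that maps JSON attributes to columns via a JSONPaths file
COPY product_details_v1_tgt
FROM 's3://my_bucket/product_details_v1.json'
credentials 'aws_access_key_id=<access_key_id>;aws_secret_access_key=<secret_access_key>'
json 's3://my_bucket/jsonpaths/product_details_jsonpaths.json';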
Method 4: DynamoDB to Redshift Using DynamoDB Streams
The above methods are fine if the use case requires only periodic copying of data from DynamoDB to Redshift. There are specific use cases where real-time syncing from DynamoDB to Redshift is needed. In such cases, DynamoDB’s Streams feature can be exploited to design a streaming copy data pipeline.
DynamoDB Stream provides a time-ordered sequence of records that correspond to item level modification in a DynamoDB table. This item-level record of table activity can be used to recreate an item-level table activity in Redshift using a client application that can consume this stream. Amazon has designed the DynamoDB Streams to adhere to the architecture of Kinesis Streams. This means the customer just needs to create a Kinesis Firehose Delivery Stream to exploit the DynamoDB Stream data. The following are the broad set of steps involved in this method:
Enable DynamoDB Stream in the DynamoDB console dashboard.
Configure a Kinesis Firehose Delivery Stream to consume the DynamoDB Stream to write this data to S3.
Implement an AWS Lambda Function to buffer the data from the Firehose Delivery Stream, batch it and apply the required transformations.
Configure another Kinesis Data Firehose to insert this data to Redshift automatically.
Even though this method requires the user to implement custom functions, it provides unlimited scope for transforming the data before writing to Redshift.
Conclusion
The article provided you with 4 different methods that you can use to copy data from DynamoDB to Redshift. Since DynamoDB is usually used as a transactional database and Redshift as a data warehouse, the need to copy data from DynamoDB is very common.
If you’re interested in learning about the differences between the two, take a look at the article: Amazon Redshift vs. DynamoDB.
Depending on whether the use case demands a one-time copy or a continuous sync, one of the above methods can be chosen. Method 2 and Method 3 are simple to implement but come with multiple limitations; moreover, they are suitable only for one-time data transfer between DynamoDB and Redshift. The method using DynamoDB Streams is suitable for real-time data transfer, but a large number of configuration parameters and intricate details have to be considered for its successful implementation.
LIKE.TG Data provides an Automated No-code Data Pipeline that empowers you to overcome the above-mentioned limitations. You can leverage LIKE.TG to seamlessly transfer data from DynamoDB to Redshift in real-time without writing a single line of code.
Learn more about LIKE.TG
Want to take LIKE.TG for a spin? Sign up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand. Check out the LIKE.TG pricing to choose the best plan for you.
Share your experience of copying data from DynamoDB to Redshift in the comment section below!

Google Sheets to BigQuery: 3 Ways to Connect & Migrate Data
As your company grows, it starts generating terabytes of complex data stored across different sources. That's when you have to incorporate a data warehouse like BigQuery into your data architecture and migrate data from Google Sheets to BigQuery. Sifting through terabytes of data in sheets is a monotonous endeavor and places a ceiling on what is achievable in data analysis. At this juncture, incorporating a data warehouse like BigQuery becomes a necessity. In this blog post, we will cover extensively how you can move data from Google Sheets to BigQuery.
Methods to Connect Google Sheets to BigQuery
Now that we have built some background information on the spreadsheets and why it is important to incorporate BigQuery into your data architecture, next we will look at how to import data. Here, it is assumed that you already have a GCP account. If you don’t already have one, you can set it up. Google offers new users $300 free credits for a year. You can always use these free credits to get a feel of GCP and access BigQuery.
Method 1: Using LIKE.TG to Move Data from Google Sheets to BigQuery
LIKE.TG is the only real-time ELT No-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs.
Using a fully managed platform like LIKE.TG (which supports Google Sheets as a free data source), you bypass all the aforementioned complexities and import Google Sheets data to BigQuery in just a few minutes. You can achieve this in 2 simple steps:
Step 1: Configure Google Sheets as a source, by entering the Pipeline Name and the spreadsheet you wish to replicate.
Step 2: Connect to your BigQuery account and start moving your data from Google Sheets to BigQuery by providing the project ID, dataset ID, Data Warehouse name, and GCS bucket.
For more details, Check out:
Google Sheets Source Connector
BigQuery Destinations Connector
Key features of LIKE.TG are,
Data Transformation:It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Schema Management: LIKE.TG can automatically detect the schema of the incoming data and map it to the destination schema.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Method 2: Using BigQuery Connector to Move Data from Google Sheets to BigQuery
You can easily upload using BigQuery’s data connector. The steps below illustrate how:
Step 1: Log in to your GCP console and Navigate to the BigQuery UI using the hamburger menu.
Step 2: Inside BigQuery, select ‘Create Dataset’.
Step 3: After creating the dataset, we create a BigQuery table that will contain the incoming data from Sheets. To create a BigQuery table from a Google Sheet, click on 'Create a table'. In the 'Create a table' tab, select Drive.
Step 4: Under the source window, choose Google Drive as your source and populate the Select Drive URL tab with the URL from your Google Sheet. You can select either CSV or Sheets as the format. Both formats allow you to select the auto-detect schema. You could also specify the column names and data types.
Step 5: Fill in the table name and select ‘Create a table.’ With your Google Sheets linked to your Google BigQuery, you can always commit changes to your sheet and it will automatically appear in Google BigQuery.
Step 6: Now that the data is in BigQuery, we can run SQL queries on it. The following image shows a short query we performed on the data in BigQuery; a representative example is also sketched below.
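In case the image does not render for you, a query against the Sheets-backed table might look like the following sketch; the project, dataset, table, and column names are placeholders, not names from this article.
-- Sketch: aggregate the table created from the sheet (all identifiers are assumed)
SELECT region,
       COUNT(*) AS row_count,
       SUM(revenue) AS total_revenue
FROM `my_project.sheets_dataset.sales_from_sheets`
GROUP BY region
ORDER BY total_revenue DESC;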
Method 3: Using Sheets Connector to Move Data from Google Sheets to BigQuery
This method to upload Google Sheets data to BigQuery is only available for Business, Enterprise, or Education G Suite accounts. It allows you to save your SQL queries directly into your Google Sheets. The steps to use the Sheets data connector are highlighted below with the help of a public dataset:
Step 1: For starters, open or create a Google Sheets spreadsheet.
Step 2: Next, click on Data > Data Connectors > Connect to BigQuery.
Step 3: Click Get Connected, and select a Google Cloud project with billing enabled.
Step 4: Next, click on Public Datasets. Type Chicago in the search box, and then select the Chicago_taxi_trips dataset. From this dataset choose the taxi_trips table and then click on the Connect button to finish this step.
This is what your Google Sheets spreadsheet will look like:
You can now use this spreadsheet to create formulas, charts, and pivot tables using various Google Sheets techniques. You can also connect a custom query instead of the whole table, as sketched below.
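A query along the following lines should work against the public dataset used above; treat the exact column choices as an illustrative assumption.
-- Sketch: summarize the Chicago taxi trips public table by payment type
SELECT payment_type,
       COUNT(*) AS trips,
       ROUND(AVG(trip_total), 2) AS avg_trip_total
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
GROUP BY payment_type
ORDER BY trips DESC;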
Managing Access and Controlling Share Settings
It is important that your data is protected across both Sheets and BigQuery, so you should manage who has access to both. To do this, all you need to do is create a Google Group to serve as an access control group.
By clicking the share icon in Sheets, you can control which of your team members can edit, view, or comment.
Whatever changes are made here will also be replicated on BigQuery.
This will serve as a form of IAM for your data set.
Limitations of using Sheets Connector to Connect Google Sheets to BigQuery
So far, we have covered two native ways to connect Google Sheets and BigQuery. Despite the benefits of this process, it has some limitations.
This process cannot support volumes of data greater than 10,000 rows in a single spreadsheet.
To make use of the Sheets data connector for BigQuery, you need a Business, Enterprise, or Education G Suite account, which is an expensive option.
Before wrapping up, let’s cover some basics.
Introduction to Google Sheets
Spreadsheets are electronic worksheets containing rows and columns in which users can input, manage, and carry out mathematical operations on their data. They give users the ability to create tables, charts, and graphs to perform analysis.
Google Sheets is a spreadsheet program that is offered by Google as a part of their Google Docs Editor suite. This suite also includes Google Drawings, Google Slides, Google Forms, Google Docs, Google Keep, and Google Sites.
Google Sheets gives you the option to choose from a vast variety of schedules, budgets, and other pre-made spreadsheets that are designed to make your work that much better and your life easier.
Here are a few key features of Google Sheets
In Google Sheets, all your changes are saved automatically as you type. You can use revision history to see old versions of the same spreadsheet. It is sorted by the people who made the change and the date.
It also gives you instant insights with its Explore panel, which provides an overview of your data, from a selection of pre-populated charts to informative summaries.
Google Sheets allows everyone to work together in the same spreadsheet at the same time.
You can create, access, and edit your spreadsheets wherever you go- from your tablet, phone, or computer.
Introduction to BigQuery
Google BigQuery is a data warehouse technology designed by Google to make data analysis more productive by providing fast SQL-querying for big data. The points below reiterate how BigQuery can help improve our overall data architecture:
When it comes to Google BigQuery size is never a problem. You can analyze up to 1TB of data and store up to 10GB for free each month.
BigQuery gives you the liberty to focus on analytics while fully abstracting all forms of infrastructure, so you can focus on what matters.
Incorporating BigQuery into your architecture will open you to the services on GCP(Google Cloud Platform). GCP provides a suite of cloud services such as data storage, data analysis, and machine learning.
With BigQuery in your architecture, you can apply machine learning to your data by using BigQuery ML (a small sketch follows this list).
If you and your team are collaborating in Google Sheets, you can use Google Data Studio to build interactive dashboards and graphical renderings that better represent the data. These dashboards are updated as the data in the spreadsheet is updated.
BigQuery offers a strong security regime for all its users. It offers a 99.9% service level agreement and strictly adheres to privacy shield principles. GCP provides its users with Identity and Access Management (IAM), where you as the main user can decide the specific data each member of your team can access.
BigQuery offers an elastic warehouse model that scales automatically according to your data size and query complexity.
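To give a flavor of BigQuery ML, here is a minimal, hypothetical sketch that trains a logistic regression model on a table assumed to have been loaded from Sheets; the project, dataset, table, and column names are all assumptions.
-- Sketch: train a simple classification model in BigQuery ML (all names are assumed)
CREATE OR REPLACE MODEL `my_project.sheets_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT plan_type, monthly_spend, support_tickets, churned
FROM `my_project.sheets_dataset.customers`;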
Additional Resources on Google Sheets to Bigquery
Move Data from Excel to Bigquery
Conclusion
This blog talks about the 3 different methods you can use to move data from Google Sheets to BigQuery in a seamless fashion.
In addition to Google Sheets, LIKE.TG can move data from a variety of free and paid Data Sources (Databases, Cloud Applications, SDKs, and more).
LIKE.TG ensures that your data is consistently and securely moved from any source to BigQuery in real-time.

How to Migrate from MariaDB to MySQL in 2 Easy Methods
MariaDB and MySQL are two widely popular relational databases that boast many of the largest enterprises as their clientele. Both MariaDB and MySQL are available in two versions – A community-driven version and an enterprise version. However, the distribution of features and development processes in the community and enterprise versions of MySQL and MariaDB differ from each other.
Even though MariaDB claims to be a drop-in replacement for MySQL, because of licensing terms and enterprise support contracts, many organizations migrate between the two as their policies change. This blog post will cover the details of how to move data from MariaDB to MySQL.
What is MariaDB?
MariaDB is an RDBMS built on SQL, created by the original developers of MySQL and intended to provide technical efficiency and versatility.
You can use this database for many use cases, including data warehousing and general data management.
Its relational nature will be helpful for you, and the open-source community provides the resources you need.
What is MySQL?
MySQL is one of the most renowned open-source relational database management systems. You can store and arrange data in structured formats, in tables with columns and rows.
You can define, query, manage, and manipulate your data using SQL. You can use MySQL to develop websites and applications.
Companies that use MySQL include Uber, Airbnb, Pinterest, and Shopify. They rely on it for their database management requirements because of its versatility and its ability to handle large-scale operations.
Methods to Integrate MariaDB with MySQL
Method 1: Using LIKE.TG Data to Connect MariaDB to MySQL
A fully managed, No-Code Data Pipeline platform like LIKE.TG Data allows you to seamlessly migrate your data from MariaDB to MySQL in just two easy steps. No specialized technical expertise is required to perform the migration.
Method 2: Using Custom Code to Connect MariaDB to MySQL
Use mysqldump to migrate your data from MariaDB to MySQL by writing a couple of commands mentioned in the blog. However, this is a costly operation that can also overload the primary database.
Method 3: Using MySQL Workbench
You can also migrate your data from MariaDB to MySQL using the MySQL Migration Wizard. However, it has limitations on the size of migrations that it can handle effectively, and as a result, it cannot handle very large datasets.
Get Started with LIKE.TG for Free
Method 1: Using LIKE.TG Data to Connect MariaDB to MySQL
The steps involved are,
Step 1: Configure MariaDB as Source
Step 2: Configure MySQL as Destination
Check out why LIKE.TG is the Best:
Schema Management: LIKE.TG takes away the tedious task of schema management. It automatically detects the schema of incoming data and maps it to the destination schema.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Data Transformation:It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Secure: LIKE.TG has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss.
Get Started with LIKE.TG for Free
Method 2: Using Custom Code to Connect MariaDB to MySQL
Since both databases provide the same underlying tools, it is very easy to copy data from MariaDB to MySQL. The following steps detail how to accomplish this.
Step 1: From the client machine, use the below command to create a complete dump of the MariaDB database:
mysqldump -u username -p database_name > source_dump.sql
This command creates a source_dump.sql file.
Step 2: Move the file to a machine that can access the target MySQL database. If the same machine has access to the target database, this step is not relevant.
Step 3: Log in as root to the target MySQL database:
mysql -u root -p
Step 4: In the MySQL shell, execute the below command to create a database:
CREATE DATABASE target_database;
where target_database is the name of the database to which data is to be imported.
Step 5: Exit the MySQL shell and go to the location where the source_dump.sql is stored.
Step 6: Execute the below command to load the database from the dump file:
mysql -u username -p target_database < source_dump.sql
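Before moving on, it is worth spot-checking the import. A minimal verification sketch follows; the customers table is a hypothetical example, and the idea is to run the same count on the MariaDB source and compare.
-- Sketch: quick post-import checks (table name is assumed)
SHOW TABLES;
SELECT COUNT(*) FROM customers;  -- run the same count on MariaDB and compare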
That concludes the process. The target database is now ready for use, and this can be verified by logging in to the MySQL shell and running checks like the ones sketched above, starting with a SHOW TABLES command. Even though this approach provides a simple way to perform a one-off copy between the two databases, it has a number of limitations. Let's take a look at them.
MariaDB to MySQL: Limitations of Custom Code Approach
In most cases, the original database will be online while you attempt to copy the data. mysqldump is a costly operation and can leave the primary database unavailable or slow while it runs.
While the mysqldump command is being executed, new data could come in resulting in some leftover data. This data needs to be handled separately.
This approach works fine if the copying operation is a one-off process. In some cases, organizations may want to maintain an exact running replica of MariaDB in MySQL and then migrate. This will need a complex script that can use the binary logs to create a replica.
Even though MariaDB claims to be a drop-in replacement, development has been diverging, and there are many incompatibilities between versions, as described here. This may lead to problems when migrating using the above approach.
Method 3: Using MySQL Workbench
In MySQL Workbench, navigate to Database > Migrate to launch the Migration Wizard.
Go to the Overview page -> select Open ODBC Manager. This is done to make sure the ODBC driver for MySQL Server is installed. If it is not, use the MySQL Installer that was used to install MySQL Workbench to install it. Then select Start Migration.
Specify the source database details -> test the connection -> select Next.
Configure the target database details and verify connection.
Let the wizard extract the schema list from the source server -> select the schemas to migrate.
The migration will begin once you specify the objects you want to migrate on the Source Objects page.
Edit the generated SQL for any object -> fix migration issues, or change the name of the target object and columns from the View drop-down of Manual Edit.
Go to the next page -> choose to create the schema in the target RDBMS -> give it some time to finish the creation, and check the created objects on the Create Target Results page.
On the Data Transfer Settings page, configure the data migration -> select Next to move your data.
Check the migration report after the process -> select Finish to close the wizard.
You can check the consistency of the source data and schema by logging into the target database.
Also check whether the table and row counts match:
SELECT COUNT(*) FROM table_name;
Get the MySQL row count of all tables in a database:
SELECT
table_name,
table_rows
FROM
information_schema.tables
WHERE
table_schema = 'classicmodels'
ORDER BY table_name;
Check the database size:
SELECT table_schema AS `Database`,
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS `Size (MB)`
FROM information_schema.TABLES
GROUP BY table_schema;
Understand the size of the table.
SELECT table_name AS "Table",
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)"
FROM information_schema.TABLES
WHERE table_schema = "database_name"
ORDER BY (data_length + index_length) DESC;
Limitations of using MySQL Workbench to Migrate MariaDB to MySQL:
Size Constraints: MySQL workbench has limitations on the size of migrations that it can handle effectively. It cannot be used for very large databases.
Limited Functionality: It cannot deal with complex data structures efficiently. It requires manual interventions or additional tools to do so when using MySQL workbench.
Use Cases of MariaDB to MySQL Migration
MySQL is suitable for heavily trafficked websites and mission-critical applications. MySQL can handle terabyte-sized databases and also supports high-availability database clustering. When you migrate MariaDB to MySQL, you can manage databases of websites and applications with high traffic. Popular applications that use the MySQL database include TYPO3, MODx, Joomla, WordPress, phpBB, MyBB, and Drupal.
MySQL is one of the most popular transactional engines for eCommerce platforms. Thus, when you convert MariaDB to MySQL, it becomes easy to manage customer data, transactions, and product catalogs.
When you import MariaDB data into MySQL, it also assists with fraud detection. MySQL helps analyze transactions and claims in real time and spot trends or anomalous behavior to prevent fraudulent activities.
Learn More About:
How to Migrate MS SQL to MySQL in 3 Methods
Migrate Postgres to MySQL
Connecting FTP to MySQL
Conclusion
This blog explained the different methods that you can use to import MariaDB to MySQL.
The manual custom coding method provides a simple approach for a one-off migration between MariaDB and MySQL.
Among the methods provided, determining which method is to be used depends on your use case.
You can go for an automated data pipeline platform if you want continuous or periodic copying operations.
Sign Up for a 14-day free trial
FAQ on MariaDB to MySQL
How do I switch from MariaDB to MySQL?
You can transfer your data from MariaDB to MySQL using custom code or automated pipeline platforms like LIKE.TG Data.
How to connect MariaDB to MySQL?
You can do this by using custom code. The steps include: 1. Create a dump of MariaDB, 2. Log in to MySQL as a root user, 3. Create a MySQL database, 4. Restore the data, and 5. Verify and test.
How to upgrade MariaDB to MySQL?
Upgrading from MariaDB to MySQL involves fully backing up the MariaDB databases. Afterward, uninstall MariaDB, install MySQL, and restore from the backup you created. Make sure the MySQL version supports all the features used in your setup.
Is MariaDB compatible with MySQL?
MariaDB’s data files are generally binary compatible with those from the equivalent MySQL version.

Best 12 Data Integration Tools Reviews 2024
Choosing the right data integration tool can be tricky, with many options available today. If you're not clear on what you need, you might end up making the wrong choice. That's why it's crucial to have essential details, such as what factors to consider and how to choose the best data integration tool, before making a decision.
In this article, I have compiled a list of 15 tools to help you choose the correct data integration tool that meets all your requirements.
You’ll also learn about the benefits of these tools and the key factors to consider when selecting these tools.
Let’s dive in!
Understanding Data Integration
Data integration is merging data from diverse sources to create a cohesive, comprehensive dataset that gives you a unified view. By consolidating data across multiple sources, your organization can discover insights and patterns that might remain hidden while examining data from individual sources alone.
List of 15 Best Data Integration Tools in 2024
With such a large number of products on the market, finding the right data integration tool for a company's needs can be tough. Here's an overview of 15 of the most popular and widely used data integration solutions. These are the top data integration tools used in the market today.
1. LIKE.TG Data
With LIKE.TG , you get a growing library of over 150 plug-and-play connectors, including all your SaaS applications, databases, and file systems. You can also choose from destinations like Snowflake, BigQuery, Redshift, Databricks, and Firebolt. Data integrations are done effortlessly in near real-time with an intuitive, no-code interface. It is scalable and cost-effectively automates a data pipeline, ensuring flexibility to meet your needs.
Key features of LIKE.TG Data
LIKE.TG ensures zero data loss, always keeping your data intact.
It lets you monitor your workflow and stay in control with enhanced visibility and reliability to identify and address issues before they escalate.
LIKE.TG provides you with 24/7 Customer Support to ensure you enjoy round-the-clock support when needed.
With LIKE.TG , you have a reliable tool that lets you worry less about the data integration and helps you focus more on your business. Check LIKE.TG ’s in-depth documentation to learn more.
Pricing at LIKE.TG Data
LIKE.TG offers three simple and transparent pricing models, starting with the Free plan, which lets you ingest up to 1 million records.
The Best-Suited Use Case for LIKE.TG Data
If you are looking for advanced capabilities in automated data mapping and efficient change data capture, LIKE.TG is the best choice.
LIKE.TG has great coverage, they keep their integrations fresh, and the tool is super reliable and accessible. The team was very responsive as well, always ready to answer questions and fix issues. It’s been a great experience!
– Prudhvi Vasa, Head of Data, Postman
Experience LIKE.TG : A Top Data Integration Tool for 2024
Feeling overwhelmed by the ever-growing list of data integration tools? Look no further! While other options may seem complex or limited, LIKE.TG offers a powerful and user-friendly solution for all your data needs.
Get Started with LIKE.TG for Free
2. Dell Boomi
Dell provides a cloud-based integration tool called Dell Boomi. This tool empowers your business to effortlessly integrate applications, partners, and customers through an intuitive visual designer and a wide array of pre-configured components. Boomi simplifies and supports ongoing integration and development tasks between multiple endpoints, irrespective of your organization's size.
Key Features of Dell Boomi
Whether you’re an SMB or a large company, you can use this tool to support several application integrations as a service.
With Dell Boomi, you can access a variety of integration and data management capabilities, including private-cloud, on-premise, and public-cloud endpoint connectors and robust ETL support.
The tool allows your business to manage Data Integration in a central place via a unified reporting portal.
Pricing at Dell Boomi
Whether you're an SMB or an enterprise, Boomi offers easily understandable, flexible, and transparent pricing, starting with basic features and ranging up to advanced requirements.
The Best-Suited Use Case for Dell Boomi
Dell Boomi is a wise choice for managing and moving your data through hybrid IT architectures.
3. Informatica PowerCenter
Informatica is a software development company that specializes in Data Integration. It provides ETL, data masking, data quality, data replication, data virtualization, master data management, and other services. You can connect it to and fetch data from a variety of heterogeneous sources and perform data processing.
Key Features of Informatica PowerCenter
You can manage and monitor your data pipelines with ease and quickly identify and address any issues that might arise.
You can ensure high data quality and accuracy using data cleansing, profiling, and standardization.
It runs alongside an extensive catalog of related products for big data integration, cloud application integration, master data management, data cleansing, and other data management functions.
Pricing at Informatica PowerCenter
Informatica offers a flexible, consumption-based pricing model, enabling you to pay for what you need. For further information, you can contact their sales team.
The Best-Suited Use Case for Informatica PowerCenter
Powercenter is a good choice if you have to deal with many legacy data sources that are primarily on-premise.
4. Talend
Talend is an ETL solution that includes data quality, application integration, data management, data integration, data preparation, and big data, among other features. Talend, after retiring its open-source version of Talend Studio, has joined hands with Qlik to provide free and paid versions of its data integration platform. They are committed to delivering updates, fixes, and vulnerability patches to ensure the platform remains secure and up-to-date.
Key Features of Talend
Talend also offers a wide array of services for advanced Data Integration, Management, Quality, and more. However, we are specifically referring to Talend Open Studio here.
Your business can install and build a setup for both on-premise and cloud ETL jobs using Spark, Hadoop, and NoSQL Databases.
It permits real-time team collaboration on data preparation.
Pricing at Talend
Talend's basic plan, starting at $100/month, provides ready-to-query schemas and advanced connectivity to improve data security.
The Best-Suited Use Case for Talend
If you can compromise on real-time data availability to save on costs, consider an open-source batch data migration tool like Talend.
5. Pentaho
Pentaho Data Integration (PDI) provides you with ETL capabilities for obtaining, cleaning, and storing data in a uniform and consistent format. This tool is extremely popular and has established itself as the most widely used and desired Data Integration component.
Key Features of Pentaho
Pentaho Data Integration (PDI) is known for its simple learning curve and simplicity of usage.
You can use Pentaho for multiple use cases that it supports outside of ETL in a Data Warehouse, such as database replication, database to flat files, and more.
Pentaho allows you to create ETL jobs on a graphical interface without writing code.
Pricing at Pentaho
Pentaho has a free, open-source version and a subscription-based enterprise model. You can contact the sales team to learn the details about the subscription-based model.
The Best-Suited Use Case for Pentaho
Since PDI is open-source, it’s a great choice if you’re cost-sensitive. Pentaho, as a batch data integration tool, doesn’t support real-time data streaming.
6. AWS Glue
AWS Glue is a robust data integration solution that excels in fully managed, cloud-based ETL processes on the Amazon Web Services (AWS) platform. Designed to help you discover, prepare, and combine data, AWS Glue simplifies analytics and machine learning.
Key Features of the AWS Glue
You don't have to write code to create and run ETL jobs; this can be done simply by using AWS Glue Studio.
Using AWS Glue, you can execute serverless ETL jobs. Also, other AWS services like S3, RDS, and Redshift can be integrated easily.
Your data sources can be crawled and catalogued automatically using AWS Glue.
Pricing at AWS Glue
AWS Glue is priced at an hourly rate, billed by the second. You can also request a pricing quote from AWS.
The Best-Suited Use Case for AWS Glue
AWS Glue is a good choice if you're looking for a fully managed, scalable, and reliable tool for cloud-based data integration.
7. Microsoft Azure Data Factory
Azure Data Factory is a cloud-based ETL and data integration service that allows you to create powerful workflows for moving and transforming data at scale. With Azure Data Factory, you can easily build and schedule data-driven workflows, known as pipelines, to gather data from various sources.
Key Features of the Microsoft Azure Data Factory
Data Factory offers a versatile integration and transformation platform that seamlessly supports and speeds up your digital transformation project using intuitive, code-free data flows.
Using built-in connectors, you can ingest all your data from diverse and multiple sources.
SQL Server Integration Services (SSIS) can be easily rehosted to build code-free ETL and ELT pipelines with built-in Git, supporting continuous integration and continuous delivery (CI/CD).
Pricing at Microsoft Azure Data Factory
Azure provides a consumption-based pricing model; you can estimate your specific cost by using the Azure Pricing Calculator available on its website.
The Best-Suited Use Case for the Microsoft Azure Data Factory
Azure Data Factory is designed to automate and coordinate your data workflows across different sources and destinations.
8. IBM Infosphere Data Stage
IBM DataStage is an enterprise-level data integration tool used to streamline your data transfer and transformation tasks. It supports data integration using ETL and ELT methods, along with parallel processing and load balancing, ensuring high performance.
Key Features of IBM Infosphere Data Stage
To integrate your structured, unstructured, and semi-structured data, you can use Data Stage.
The platform provides a range of data quality features for you, including data profiling, standardization, matching, enhancement, and real-time data quality monitoring.
By transforming large volumes of raw data, you can extract high-quality, usable information and ensure consistent and assimilated data for efficient data integrations.
Pricing at IBM Infosphere Data Stage
DataStage offers a free trial, after which you can contact the IBM sales team to obtain pricing for the license and full version.
The Best-Suited Use Case for IBM Infosphere Data Stage
IBM InfoSphere DataStage is the right integration tool for you if you need parallel processing capabilities; it can handle large-scale data integration efficiently while enhancing performance.
9. SnapLogic
SnapLogic is an integration platform as a service (iPaaS) that offers fast integration services for your enterprise. It comes with a simple, easy-to-use browser-based interface and 500+ pre-built connectors. With the help of SnapLogic’s Artificial Intelligence-based assistant, a person like you from any line of business can effortlessly integrate the two platforms using the click-and-go feature.
Key Features of SnapLogic
SnapLogic offers reporting tools that allow you to view the ETL job progress with the help of graphs and charts.
It provides the simplest user interface, enabling you to have self-service integration. Anyone with no technical knowledge can integrate the source with the destination.
SnapLogic’s intelligent system detects any EDI error, instantly notifies you, and prepares a log report for the issue.
Pricing at SnapLogic
SnapLogic's pricing is based on the package you select and the configuration you want, with unlimited data flow. You can discuss the pricing package with their team.
The Best-Suited Use Case for SnapLogic
SnapLogic is an easy-to-use data integration tool that is best suited for citizen integrators without technical knowledge.
10. Jitterbit
Jitterbit is a Harmony integration tool that enables your enterprise to establish API connections between apps and services. It supports cloud-based, on-premise, and SaaS applications. Along with data integration tools, it offers AI features that include speech recognition, real-time language translation, and a recommendation system. It has been called the Swiss Army Knife of big data integration platforms.
Key Features of Jitterbit
Jitterbit offers a powerful Workflow Designer that allows you to create new integration between two apps with its pre-built data integration tool templates.
It comes with an Automapper that can help you map similar fields and over 300 formulas to make the transformation task easier.
Jitterbit provides a virtual environment where you can test integrations without disrupting existing ones.
Pricing at Jitterbit
Jitterbit offers three pricing models: Standard, Professional, and Enterprise. All require a yearly subscription, and you can discuss a quote with their team.
The Best-Suited Use Case for Jitterbit
Jitterbit is an Enterprise Integration Platform as a Service (EiPaaS) that you can use to solve complex integrations quickly.
11. Zigiwave
Zigiwave is a data integration tool for ITSM, monitoring, DevOps, cloud, and CRM systems. It can automate your workflows in a few clicks, as it offers a no-code interface for easy integrations. With its deep integration features, you can map entities at any level. Zigiwave's smart data loss prevention protects data during system downtime.
Key Features of Zigiwave
Zigiwave acts as an intermediary between your two platforms and doesn't store any data, which makes it a secure cloud data integration platform.
Zigiwave synchronizes your data in real-time, making it a zero-lag data integration tool for enterprises.
It is highly flexible and customizable and you can filter and map data according to your needs.
Pricing at Zigiwave
You can get a 30-day free trial at Zigiwave and can book a meeting with them to discuss the pricing.
The Best-Suited Use Case for Zigiwave
It is best suited if your company has fewer resources and wants to automate operations with cost-effective solutions.
12. IRI Voracity
IRI Voracity is an iPaaS Data Integration tool that can connect your two apps with its powerful APIs. It also offers federation, masking, data quality, and MDM integrations. Its GUI workspace is designed on Eclipse to perform integrations, transformations, and Hadoop jobs. It offers other tools that help you understand and track data transfers easily.
Key Features of IRI Voracity
IRI Voracity generates detailed reports for ETL jobs that help you track all the activities and log all the errors.
It also enables you to integrate your data directly with other Business Analytics and Business Intelligence tools, helping you analyze your data in one place.
You can transform, normalize, or denormalize your data with the help of a GUI wizard.
Pricing at IRI Voracity
IRI Voracity provides its pricing on request; you need to ask for a quote.
The Best-Suited Use Case for IRI Voracity
If you’re familiar with Eclipse-based wizards and need the additional features of IRI Voracity Data Management, IRI Voracity, an Eclipse GUI-based data integration platform, is ideal for you.
13. Oracle Data Integrator
Oracle Data Integrator is one of the most renowned Data Integration providers, offering seamless data integration for SaaS and SOA-enabled data services. It also offers easy interoperability with Oracle Warehouse Builder (OWB) for enterprise users like yourself. Oracle Data Integrator provides GUI-based tools for a faster and better user experience.
Key Features of Oracle Data Integrator
It automatically detects faulty data during your data loading and transforming process and recycles it before loading it again.
It supports all RDBMSs, such as Oracle, Exadata, Teradata, IBM DB2, Netezza, Sybase IQ, and other file technologies, such as XML and ERPs.
Its unique ETL architecture offers you greater productivity with low maintenance and higher performance for data transformation.
Pricing at Oracle Data Integrator
Though a free, open-source edition is available, you can get an Oracle Data Integrator Enterprise Edition license at $900 per Named User Plus license (with $198 for software update license and support), or $30,000 per Processor license (with $6,600 for software update license and support).
The Best-Suited Use Case for Oracle Data Integrator
The unique ETL architecture of Oracle Data Integrator eliminates the dedicated ETL servers, which reduces its hardware and software maintenance costs. So it’s best for your business if you want cost-effective data integration technologies.
14. Celigo
Celigo is an iPaaS Data Integration tool with a click-and-go feature. It automates most of your workflow for data extraction and transformation to destinations. It offers many pre-built connectors, including most Cloud platforms used in the industry daily. Its user-friendly interface enables technical and non-technical users to perform data integration jobs within minutes.
Key Features of Celigo
Celigo offers a low-code GUI-based Flow Builder that allows you to build custom integrations from scratch.
It provides an Autopilot feature with integrator.io that allows you to automate most workflows with the help of pattern-recognition AI.
Using Celigo, developers like you can create and share your stacks and generate tokens for direct API calls for complex flow logic to build integrations.
Pricing at Celigo
Celigo offers four pricing plans: a Free trial plan with 2 endpoint apps, Professional with 5 endpoint apps, Premium with 10 endpoint apps, and Enterprise with 20 endpoint apps. You can get their prices by contacting Celigo.
The Best-Suited Use Case for Celigo
It is perfect if you want to automate most of your data integration workflow and have no coding knowledge.
15. MuleSoft Anypoint Platform
MuleSoft Anypoint Platform is a unified iPaaS Data Integration tool that helps your company establish a connection between two cloud-based apps or a cloud or on-premise system for seamless data synchronization. It stores the data stream from data sources locally and on the Cloud. To access and transform your data, you can use the MuleSoft expression language.
Key Features of the MuleSoft Anypoint Platform
It offers mobile support that allows you to manage your workflow and monitor tasks from backend systems, legacy systems, and SaaS applications.
MuleSoft can integrate with many enterprise solutions and IoT devices such as sensors, medical devices, etc.
It allows you to perform complex integrations with pre-built templates and out-of-box connectors to accelerate the entire data transfer process.
Pricing at MuleSoft Anypoint Platform
Anypoint Integration Starter is the starting plan, which lets you design, deploy, and manage APIs and migrations; you can get a quote on request.
The Best-Suited Use Case for the MuleSoft Anypoint Platform
When your company needs to connect to many information sources in public and private clouds, and wants to access legacy system data, this integrated data platform is the best solution.
What Factors to Consider While Selecting Data Integration Tools?
With so many great options out there, it is important to pick the right data integration tool wisely. So, how do you select the best data integration platform for your use case? Here are some factors to keep in mind:
Data Sources Supported
Scalability
Security and Compliance
Real-Time Data Availability
Data Transformations
1) Data Sources Supported
As your business grows, the complexity of your data integration strategy grows with it. Note that new streams, web-based applications, and data sources are being added to your business suite daily by different teams.
Hence, it is important to choose a tool that can grow with you and accommodate your expanding list of data sources.
2) Scalability
Initially, the volume of data your data integration software needs to handle may be small. But as your business scales, you will start capturing every touchpoint of your customers, exponentially growing the volume of data your data infrastructure must be capable of handling.
When you choose your Data Integration tool, ensure that the tool can easily scale up and down as per your data needs.
3) Security and Compliance
Given you are dealing with mission-critical data, you have to make sure that the solution offers the expertise and the resources needed to ensure that you are covered when it comes to security and compliance.
4) Real-Time Data Availability
This is applicable only if your use case is to bring data to your destination for real-time analysis. For many companies, this is the primary use case. Not all data integration solutions support this; many bring data to the destination in batches, creating a lag of anywhere between a few hours and days.
5) Data Transformations
The data extracted from different applications comes in different formats. For example, the date in your database may be stored as epoch time, whereas another system stores the date as "mm-dd-yy".
To do meaningful analysis, companies want to bring data to the destination in a common format that makes analysis easy and fast. This is where data transformation comes into play; depending on your use case, pick a tool that enables seamless data transformations (see the sketch below).
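As a concrete illustration, the kind of date normalization described above could be expressed in MySQL syntax as in the sketch below; the table and column names are assumptions made purely for the example.
-- Sketch: normalize two differently formatted date representations (identifiers are assumed)
SELECT order_id,
       FROM_UNIXTIME(created_at_epoch)          AS created_at,   -- epoch seconds to DATETIME
       STR_TO_DATE(shipped_on_text, '%m-%d-%y') AS shipped_on    -- "mm-dd-yy" text to DATE
FROM staging_orders;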
Benefits of Data Integration Tools
Now that you know how to pick the right tool for your use case, it is time to learn how these tools benefit your business. The benefits include:
Improved Decision-Making
Since the raw data is now converted into usable information and data is present in a consolidated form, your decisions based on that information will be faster and more accurate.
Automated Business Processes
Using these tools your data integration task becomes automated, which leaves you and your team with more time to focus on business development related activities.
Reduced Costs
By utilizing these tools, integration processes are automated, so manual effort and errors are significantly reduced, thereby reducing overall cost.
Improved Customer Service
You can deliver more personalized and efficient customer support, as you now have a comprehensive view of each customer that helps you understand their needs.
Enhanced Compliance and Security
These tools make sure that the data handled follows proper regulatory standards and any of your sensitive information is protected.
Increased Agility and Collaboration
You can easily share your data and collaborate across departments without interruptions, which boosts your data's overall agility and responsiveness.
Learn more about:
Top 7 Free Open-source ETL Tools
AWS Integration Strategies
Conclusion
This article provided you with a brief overview of Data Integration and Data Integration Tools, along with the factors to consider while choosing these tools. You are now in the position to choose the best Data Integration tools based on your requirements.
Now that you have an idea of how to go about picking a Data Integration Tool, let us know your thoughts/questions in the comments section below.
FAQ on Data Integration Tools
What are the main features to look for in a data integration tool?
The main features to look for in a data integration tool are the data sources it supports, its scalability, the security and compliance it follows, real-time data availability, and last but not the least, the data transformations it provides.
How do data integration tools enhance data security?
The data integration tools enhance data security by following proper regulatory standards and protecting your sensitive information.
Can data integration tools handle real-time data?
Integration tools like LIKE.TG Data, Talend, Jitterbit, and Zigiwave can handle real-time data.
What are the cost considerations for different data integration tools?
Cost considerations for different data integration tools include the initial licensing and subscription fees, the cost to implement and set up the tool, and ongoing maintenance and support.
How do I choose between open-source and proprietary tools?
While choosing between open-source and proprietary tools, you should consider relevant factors such as business size, scalability, available budget, deployment time, and the reputation of the data integration solution partner.

Salesforce to MySQL Integration: 2 Easy Methods
While Salesforce provides its own analytics capabilities, many organizations need to synchronize Salesforce data into external databases like MySQL for consolidated analysis. This article explores two key methods for integrating Salesforce with MySQL: an ETL pipeline and custom code. Read on for an overview of both integration methods and guidance on choosing the right approach.
Method 1: Using LIKE.TG Data to Set Up Salesforce to MySQL Integration
LIKE.TG Data, a No-code Data Pipeline platform, helps you transfer data from Salesforce (among 150+ sources) to your desired destination, like MySQL, in real time, in an effortless manner, and for free. With its minimal learning curve, LIKE.TG can be set up in a matter of minutes, making you ready to perform operations in no time instead of repeatedly writing code.
Sign up here for a 14-day Free Trial!
Method 2: Using Custom Code to Set Up Salesforce to MySQL Integration
You can follow the step-by-step guide for connecting Salesforce to MySQL using custom codes. This approach uses Salesforce APIs to achieve this data transfer. Additionally, it will also highlight the limitations and challenges of this approach.
Methods to Set up Salesforce to MySQL Integration
You can easily connect your Salesforce account to your MySQL account using the following 2 methods:
Method 1: Using LIKE.TG Data to Set Up Salesforce to MySQL Integration
LIKE.TG Data takes care of all your data preprocessing needs and lets you focus on key business activities and draw a much more powerful insight on how to generate more leads, retain customers, and take your business to new heights of profitability.
It provides a consistent reliable solution to manage data in real-time and always has analysis-ready data in your desired destination.
LIKE.TG can integrate data from Salesforce to MySQL in just 2 simple steps:
Authenticate and configure your Salesforce data source as shown in the below image. To learn more about this step, visit here.
Configure your MySQL destination where the data needs to be loaded, as shown in the below image. To learn more about this step, visit here.
Method 2: Using Custom Code to Set Up Salesforce to MySQL Integration
This method requires you to manually build a custom code using various Salesforce APIs to connect Salesforce to MySQL database. It is important to understand these APIs before learning the required steps.
APIs Required to Connect Salesforce to MySQL Using Custom Code
Salesforce provides different types of APIs and utilities to query the data available in the form of Salesforce objects. These APIs help to interact with Salesforce data. An overview of these APIs is as follows:
Salesforce REST APIs: Salesforce REST APIs provide a simple and convenient set of web services to interact with Salesforce objects. These APIs are recommended for implementing mobile and web applications that work with Salesforce objects.
Salesforce SOAP APIs: Salesforce SOAP APIs are to be used when applications need a stateful API or have strict requirements on transactional reliability. SOAP APIs allow you to establish formal contracts of API behavior through the use of WSDL.
Salesforce BULK APIs: Salesforce BULK APIs are tailor-made for handling a large amount of data and have the ability to download Salesforce data as CSV files. It can handle data ranging from a few thousand records to millions of records. It works asynchronously and is batched. Background operation is also possible with Bulk APIs.
Salesforce Data Loader: Salesforce also provides a Data Loader utility with export functionality. Data Loader is capable of selecting required attributes from objects and then exporting them to a CSV file. It comes with some limitations based on the Salesforce subscription plan to which the user belongs. Internally, Data Loader works based on bulk APIs.
Steps to Connect Salesforce to MySQL
Use the following steps to achieve Salesforce to MySQL integration:
Step 1: Log in to Salesforce using the SOAP API and get the session id. For logging in first create an XML file named login.txt in the below format.
<?xml version="1.0" encoding="utf-8" ?>
<env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
<env:Body>
<n1:login xmlns:n1="urn:partner.soap.sforce.com">
<n1:username>your_username</n1:username>
<n1:password>your_password</n1:password>
</n1:login>
</env:Body>
</env:Envelope>
Step 2: Execute the below command to login
curl https://login.Salesforce.com/services/Soap/u/47.0 -H "Content-Type: text/xml;
charset=UTF-8" -H "SOAPAction: login" -d @login.txt
From the resultant XML, note the session id. This session id is to be used for all subsequent requests.
Step 3: Create a BULK API job. For doing this, create a text file in the folder named job.txt with the following content.
<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
<operation>query</operation>
<object>Contact</object>
<contentType>CSV</contentType>
</jobInfo>
Please note that the object attribute in the above XML should correspond to the object whose data is to be queried. Here we are pulling data from the object called Contact.
Execute the below command after creating job.txt:
curl https://instance.Salesforce.com/services/async/47.0/job -H "X-SFDC-Session: sessionId" -H "Content-Type: application/xml;
charset=UTF-8" -d @job.txt
From the result, note the job id. This job-id will be used to form the URL for subsequent requests. Please note the URL will change according to the URL of the user’s Salesforce organization.
Step 4: Use cURL again to add a batch that executes the SOQL query and retrieves the results.
curl https://instance.Salesforce.com/services/async/47.0/job/jobId/batch
-H "X-SFDC-Session: sessionId" -H "Content-Type: text/csv; charset=UTF-8"
-d "SELECT Name, Description FROM Contact"
Step 5: Close the job. For doing this, create a file called close.txt with the below entry.
<?xml version="1.0" encoding="UTF-8"?>
<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">
<state>Closed</state>
</jobInfo>
Execute the below command after creating the file to close the job.
curl https://instance.Salesforce.com/services/async/47.0/job/jobId -H "X-SFDC-Session: sessionId" -H "Content-Type: application/xml;
charset=UTF-8" -d @close.txt
Step 6: Retrieve the results id for accessing the URL for results. Execute the below command.
curl -H "X-SFDC-Session: sessionId" https://instance.Salesforce.com/services/async/47.0/job/jobId/batch/batchId/result
Step 7: Retrieve the actual results using the result ID fetched from the above step.
curl -H "X-SFDC-Session: sessionId" https://instance.Salesforce.com/services/async/47.0/job/jobId/batch/batchId/result/resultId
This will provide a CSV file with rows of data. Save the CSV file as contacts.csv.
Step 8: Load the data to MySQL using the LOAD DATA INFILE command. Assuming the target table is already created, this can be done by executing the below command (a sample table definition is sketched after it).
LOAD DATA INFILE 'contacts.csv' INTO TABLE contacts
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
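The command above assumes the contacts table already exists. A minimal table definition matching the two exported columns might look like the sketch below; the column types are assumptions.
-- Sketch: a minimal target table for the exported Contact CSV (column types are assumed)
CREATE TABLE contacts (
  name        VARCHAR(255),
  description TEXT
);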
Alternatively, instead of using the Bulk API manually, the Salesforce Data Loader utility can be used to export CSV files of objects. The caveat here is that usage of certain Data Loader functionalities is restricted based on the user's subscription plan.
There is also a limit to how frequently Data Loader export operations can be performed or scheduled.
Limitations of Using Custom Code Method
As evident from the above steps, loading data from Salesforce to MySQL through the manual method is both a tedious and fragile process with multiple error-prone steps.
This works well when you have a one-time or batch need to bring data from Salesforce. In case you need data more frequently or in real-time, you would need to build additional processes to achieve this successfully.
Conclusion
In this blog, we discussed how to achieve Salesforce to MySQL Integration using 2 different approaches. Additionally, it has also highlighted the limitations and challenges of using the custom code method.
Visit our Website to Explore LIKE.TG
A more graceful method to achieve the same outcome would be to use a code-free Data Integration Platform likeLIKE.TG Data. LIKE.TG can mask all the ETL complexities and ensure that your data is securely moved to MySQL from Salesforce in just a few minutes and for free.
Want to give LIKE.TG a spin? Sign Up for a 14-day free trialand experience the feature-rich LIKE.TG suite firsthand. Check out our pricing to choose the right plan for you!
Let us know your thoughts on the 2 approaches to moving data from Salesforce to MySQL in the comments.

Aurora to Snowflake ETL: 5 Steps to Move Data Easily
Often businesses have a different Database to store transactions (Eg: Amazon Aurora) and another Data Warehouse (Eg. Snowflake) for the company’s Analytical needs. There are 2 prime reasons to move data from your transactional Database to a Warehouse (Eg: Aurora to Snowflake). Firstly, the transaction Database is optimized for fast writes and responses. Running Analytics queries on large data sets with many aggregations and Joins will slow down the Database. This might eventually take a toll on the customer experience. Secondly, Data Warehouses are built to handle scaling data sets and Analytical queries. Moreover, they can host the data from multiple data sources and aid in deeper analysis.
This post will introduce you to Aurora and Snowflake. It will also highlight the steps to move data from Aurora to Snowflake. In addition, you will explore some of the limitations associated with this method. You will be introduced to an easier alternative to solve these challenges. So, read along to gain insights and understand how to migrate data from Aurora to Snowflake.
Understanding Aurora and Snowflake
AWS RDS (Relational Database Service) is the original relational database service from AWS, and it supports most open-source and proprietary databases. Open-source offerings on RDS, like MySQL and PostgreSQL, are much more cost-effective than enterprise database solutions like Oracle. But most of the time, open-source solutions require a lot of performance tuning to get on par with enterprise RDBMSs in performance and in aspects like concurrent connections.
AWS introduced a new relational database service called Aurora, which is compatible with MySQL and PostgreSQL, to overcome the well-known weaknesses of those databases while costing much less than enterprise databases. No wonder many organizations are moving to Aurora as their primary transactional database system.
On the other end, Snowflake might be the most cost-effective and fast data warehousing solution. Its compute resources scale dynamically, and storage is separated and billed independently. Snowflake can run on different cloud vendors, including AWS, so data movement from Aurora to Snowflake can also be done at low cost. Read about Snowflake's features here.
Methods to load data from Amazon Aurora to Snowflake
Here are two ways that can be used to approach Aurora to Snowflake ETL:
Method 1: Build custom scripts to move data from Aurora to Snowflake.
Method 2: Implement a hassle-free, no-code Data Integration Platform like LIKE.TG Data – 14-Day Free Trial (Official Snowflake ETL Partner) – to move data from Aurora to Snowflake.
GET STARTED WITH LIKE.TG FOR FREE
This post will discuss Method 1 in detail to migrate data from Aurora to Snowflake. The blog will also highlight the limitations of this approach and the workarounds to solve them.
Move Data from Aurora to Snowflake using ETL Scripts
The steps to replicate data from Amazon Aurora to Snowflake are as follows:
1. Extract Data from Aurora Cluster to S3
The SELECT INTO OUTFILE S3 statement can be used to query data from an Aurora MySQL cluster and save the result directly to S3, so the data does not have to pass through the client, which makes the export fast and efficient. To save data to S3 from an Aurora cluster, proper permissions need to be set first. For that –
Create a proper IAM policy to access S3 objects – Refer to AWS documentation here.
Create a new IAM role, and attach the IAM policy you created in the above step.
Set aurora_select_into_s3_role or aws_default_s3_role cluster parameter to the ARN of the new IAM role.
Associate the IAM role that you created with the Aurora cluster.
Configure the Aurora cluster to allow outbound connections to S3 – Read more on this here.
Other important points to be noted while exporting data to S3:
User Privilege – The user that issues the SELECT INTO OUTFILE S3 statement must have the privilege to do so. To grant access –
GRANT SELECT INTO S3 ON *.* TO 'user'@'domain'.
Note that this privilege is specific to Aurora. RDS doesn’t have such a privilege option.
Manifest File – You can set the MANIFEST ON option to create a manifest file, in JSON format, that lists the output files uploaded to the S3 path. The files are listed in the same order in which they are created. E.g.:
{
  "entries": [
    { "url": "s3-us-east-1://s3_bucket/file_prefix.part_00000" },
    { "url": "s3-us-east-1://s3_bucket/file_prefix.part_00001" },
    { "url": "s3-us-east-1://s3_bucket/file_prefix.part_00002" }
  ]
}
Output Files – The output is stored as delimited text files. As of now, compressed or encrypted output files are not supported.
Overwrite Existing File – Set the OVERWRITE ON option to overwrite any existing file in S3 that has the same name.
Each output file holds at most approximately 6 GB of data. If the data selected by the statement is smaller than that, a single file is created; otherwise, multiple files are created. No rows are split across file boundaries.
If the data volume to be exported is larger than 25 GB, it is recommended to run multiple statements, each exporting a different portion of the data (see the sketch after the examples below).
No metadata, such as the table schema, is uploaded to S3.
As of now, there is no direct way to monitor the progress of the data export. One simple workaround is to set the MANIFEST option ON; the manifest is the last file created, so its presence indicates the export has finished. Examples:
The statement below writes to an S3 bucket located in a different region. Each field is terminated by a comma and each row is terminated by '\n'.
SELECT * FROM students INTO OUTFILE S3 's3-us-west-2://aurora-out/sample_students_data'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
Below is another example that writes to an S3 bucket located in the same region. A manifest file will also be created.
SELECT * FROM students INTO OUTFILE S3 's3://aurora-out/sample_students_data'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
MANIFEST ON;
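If a table is too large to export comfortably in one statement, one hedged approach is to split the export by a key range, as recommended above. The table name, key column, and cut-off value below are illustrative only:
-- export the first portion of the table (illustrative id range)
SELECT * FROM students WHERE id <= 1000000
INTO OUTFILE S3 's3://aurora-out/students_part_1'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
MANIFEST ON;

-- export the remaining rows with a second statement
SELECT * FROM students WHERE id > 1000000
INTO OUTFILE S3 's3://aurora-out/students_part_2'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
MANIFEST ON;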
2. Convert Data Types and Format them
There might be data transformations corresponding to business logic or organizational standards to be applied while transferring data from Aurora to Snowflake. Apart from those high-level mappings, some basic things to be considered generally are listed below:
All popular character sets including UTF-8, UTF-16 are supported by Snowflake. The full list can be found here.
Many Cloud-based and open-source Big Data systems compromise on standard relational constraints such as Primary Keys. Snowflake lets you define UNIQUE, PRIMARY KEY, FOREIGN KEY, and NOT NULL constraints, although only NOT NULL is actually enforced; the others are informational but still helpful when modelling and loading data.
Data type support in Snowflake is fairly rich, including nested data structures such as arrays. Refer to the Snowflake documentation for the list of its data types and how they map to the corresponding MySQL/Aurora types.
Snowflake is quite flexible with date and time formats. If a custom format is used in your files, it can be explicitly specified using the File Format Option while loading data into the table, as shown below. The complete list of date and time formats can be found here.
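As a sketch, a custom date or timestamp format can be declared once in a named file format and reused while loading; the format name and patterns below are assumptions, so adjust them to match your exported files:
create or replace file format aurora_csv_format
  type = 'CSV'
  field_delimiter = ','
  date_format = 'DD-MM-YYYY'                 -- e.g. 25-12-2023 in the source files
  timestamp_format = 'DD-MM-YYYY HH24:MI:SS';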
3. Stage Data Files to the Snowflake Staging Area
Snowflake requires data to be uploaded to a temporary location before it is loaded into a table. This temporary location is cloud storage that Snowflake has access to, and the process is called staging. A Snowflake stage can be either internal or external.
(A) Internal Stage
In Snowflake, each user and table is automatically assigned an internal stage for data files. It is also possible to create internal stages explicitly, and these can be named.
The stage assigned to the user is named as ‘@~’.
The stage assigned to a table will have the name of the table.
The default stages assigned to a user or table can’t be altered or dropped.
The default stages assigned to a user or table do not support setting file format options.
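As a quick illustration, these default stages can be referenced directly from SnowSQL; the table name below is hypothetical:
list @~;              -- files staged for the current user
list @%aurora_table;  -- files staged for the table named aurora_table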
As mentioned above, internal stages can also be created explicitly by the user using SQL statements. While creating stages explicitly like this, many data loading options can be assigned to those stages like file format, date format, etc.
For interacting with Snowflake to load data or create tables, SnowSQL is a very handy CLI client, available for Linux, Mac, and Windows, that can be used to run Snowflake commands. Read more about the tool and its options here.
Below are some example commands to create a stage:
Create a named internal stage called my_aurora_stage and assign some default options:
create or replace stage my_aurora_stage
copy_options = (on_error='skip_file')
file_format = (type = 'CSV' field_delimiter = '|' skip_header = 1);
PUT is the command used to stage files to an internal Snowflake stage. The syntax of the PUT command is:
PUT file://path_to_file/filename internal_stage_name
Eg:
Upload a file named students_data.csv in the /tmp/aurora_data/data/ directory to an internal stage named aurora_stage.
put file:///tmp/aurora_data/data/students_data.csv @aurora_stage;
Snowflake provides many options that can be used to improve data-load performance, such as the degree of parallelism used while uploading the file and automatic compression; an example follows. More information and the complete list of options are listed here.
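For example, the earlier PUT command could be tuned as follows; the parallelism value is an assumption and should be adjusted to your network and file sizes:
put file:///tmp/aurora_data/data/students_data.csv @aurora_stage
  parallel = 8          -- number of threads used for the upload
  auto_compress = true; -- gzip the file before uploading (the default)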
(B) External Stage
Just like internal stages, Snowflake supports Amazon S3 and Microsoft Azure as external staging locations. If the data is already in an external stage that Snowflake can access, it can be loaded directly into the Snowflake table, with no need to move it to an internal stage first.
To create an external stage on S3, IAM credentials with proper access permissions need to be provided. In case the data is encrypted, encryption keys should be provided.
create or replace stage aurora_ext_stage url='s3://snowflake_aurora/data/load/files/'
credentials=(aws_key_id='13311a23344rrb3c' aws_secret_key='abddfgrrcd4kx5y6z')
encryption=(master_key = 'eSxX0jzsdsdYfIjkahsdkjamNNNaaaDwOaO8=');
Data can be uploaded to the external stage using the respective Cloud service's tools. Since data from Amazon Aurora is exported to S3, that S3 location itself can be used as the external staging location, which helps minimize data movement.
4. Import Staged Files to Snowflake Table
Now the data is present in an external or internal stage and has to be loaded into a Snowflake table. The command used to do this is COPY INTO. Executing the COPY INTO command requires compute resources in the form of Snowflake virtual warehouses, which are billed as per consumption.
Eg:
To load from a named internal stage:
copy into aurora_table
from @aurora_stage;
To load data from the external stage (here only a single file is specified):
copy into my_external_stage_table
from @aurora_ext_stage/tutorials/dataloading/students_ext.csv;
You can even copy directly from an external location:
copy into aurora_table
from s3://mybucket/aurora_snow/data/files
credentials=(aws_key_id='$AWS_ACCESS_KEY_ID' aws_secret_key='$AWS_SECRET_ACCESS_KEY')
encryption=(master_key = 'eSxX009jhh76jkIuLPH5r4BD09wOaO8=')
file_format = (format_name = csv_format);
Files can be specified using patterns:
copy into aurora_pattern_table
from @aurora_stage
file_format = (type = 'TSV')
pattern='.*/.*/.*[.]csv[.]gz';
Some commonly used options for CSV file loading using the COPY command are:
COMPRESSION to specify compression algorithm used for the files
RECORD_DELIMITER to indicate lines separator character
FIELD_DELIMITER is the character separating fields in the file
SKIP_HEADER is the number of header lines skipped
DATE_FORMAT is the date format specifier
TIME_FORMAT is the time format specifier
There are many other options. For the full list click here.
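A hedged sketch that ties several of these options together, assuming gzip-compressed CSV files and the stage and table names from the earlier examples:
copy into aurora_table
from @aurora_stage
file_format = (type = 'CSV'
               compression = 'GZIP'
               record_delimiter = '\n'
               field_delimiter = ','
               skip_header = 1
               date_format = 'YYYY-MM-DD'
               time_format = 'HH24:MI:SS');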
5. Update Snowflake Table
So far, the blog has covered how to extract data from Aurora and simply insert it into a Snowflake table. Next, let’s look deeper into how to handle incremental data uploads to the Snowflake table.
Snowflake’s architecture is unique: it is not based on any existing big data framework and has no limitations on row-level updates. This makes uploading delta data to a Snowflake table much easier compared to systems like Hive. The way forward is to load the incrementally extracted data into an intermediate table and then, based on the data in the intermediate table, modify the records in the final table.
Three common methods used to modify the final table once data is loaded into a landing (intermediate) table are described below.
1. Update the rows in the target table. Next, insert new rows from the intermediate or landing table which are not in the final table.
UPDATE aurora_target_table
SET value = l.value
FROM landing_delta_table l
WHERE aurora_target_table.id = l.id;
INSERT INTO aurora_target_table (id, value)
SELECT id, value FROM landing_delta_table WHERE id NOT IN (SELECT id FROM aurora_target_table);
2. Delete all records from the target table which are in the landing table. Then insert all rows from the landing table to the final table.
DELETE FROM aurora_target_table
WHERE id IN (SELECT id FROM landing_table);
INSERT INTO aurora_target_table (id, value)
SELECT id, value FROM landing_table;
MERGE statement – Inserts and updates are combined in a single MERGE statement, which applies the changes from the landing table to the target table in one SQL statement.
MERGE into aurora_target_table t1 using landing_delta_table t2 on t1.id = t2.id
WHEN matched then update set value = t2.value
WHEN not matched then INSERT (id, value) values (t2.id, t2.value);
Limitations of Writing Custom ETL Code to Move Data from Aurora to Snowflake
While the approach may look very straightforward to migrate data from Aurora to Snowflake, it does come with limitations. Some of these are listed below:
You would have to invest precious engineering resources to hand-code the pipeline. This will increase the time for the data to be available in Snowflake.
You will have to invest in engineering resources to constantly monitor and maintain the infrastructure. Code Breaks, Schema Changes at the source, Destination Unavailability – these issues will crop up more often than you would account for while starting the ETL project.
The above approach fails if you need data to be streamed in real-time from Aurora to Snowflake. You would need to add additional steps and set up cron jobs to achieve this.
So, to overcome these limitations and to load your data seamlessly from Amazon Aurora to Snowflake you can use a third-party tool like LIKE.TG .
EASY WAY TO MOVE DATA FROM AURORA TO SNOWFLAKE
On the other hand, a Data Pipeline Platform such as LIKE.TG, an official Snowflake ETL partner, can help you bring data from Aurora to Snowflake in no time. Zero Code, Zero Setup Time, Zero Data Loss. Here are the simple steps to load data from Aurora to Snowflake using LIKE.TG:
Authenticate and Connect to your Aurora DB.
Select the replication mode: (a) Full Dump and Load (b) Incremental load for append-only data (c) Change Data Capture
Configure the Snowflake Data Warehouse for data load.
SIGN UP HERE FOR A 14-DAY FREE TRIAL!
For a next-generation digital organization, there should be a seamless data movement between Transactional and Analytical systems. Using an intuitive and reliable platform like LIKE.TG to migrate your data from Aurora to Snowflake ensures that accurate and consistent data is available in Snowflake in real-time.
Conclusion
In this article, you gained a basic understanding of AWS Aurora and Snowflake. Moreover, you understood the steps to migrate your data from Aurora to Snowflake using Custom ETL scripts. In addition, you explored the limitations of this method. Hence, you were introduced to an easier alternative, LIKE.TG to move your data from Amazon Aurora to Snowflake seamlessly.
VISIT OUR WEBSITE TO EXPLORE LIKE.TG
LIKE.TG Data is a No-Code Data Pipeline that offers a faster way to move data from 150+ Data Sources, including 50+ Free Sources, into your Data Warehouse such as Snowflake, to be visualized in a BI tool. LIKE.TG is fully automated and hence does not require you to code. You can easily load your data from Aurora to Snowflake in a hassle-free manner.
Want to take LIKE.TG for a spin? Check out our transparent pricing to make an informed decision.
SIGN UP and experience a hassle-free data replication from Aurora to Snowflake.
Share your experience of migrating data from Aurora to Snowflake in the comments section below!

How To Set up SQL Server to Snowflake in 4 Easy Methods
Snowflake is great if you have big data needs. It offers scalable computing and limitless size in a traditional SQL and Data Warehouse setting. If you have a relatively small dataset or low concurrency/load, you won’t see the benefits of Snowflake. Simply put, Snowflake has a friendly UI and unlimited storage capacity, along with the control, security, and performance you’d expect from a Data Warehouse, which SQL Server is not. Snowflake’s unique Cloud Architecture enables unlimited scale and concurrency without resource contention, the ‘Holy Grail’ of Data Warehousing.
One of the biggest challenges of migrating data from SQL server to Snowflake is choosing from all the different options available. This blog post covers the detailed steps of 4 methods that you need to follow for SQL Server to Snowflake migration. Read along and decide, which method suits you the best!
What is MS SQL Server?
Microsoft SQL Server (MS SQL Server) is a relational database management system (RDBMS) developed by Microsoft. It is used to store and retrieve data as requested by other software applications, which may run either on the same computer or on another computer across a network. MS SQL Server is designed to handle a wide range of data management tasks and supports various transaction processing, business intelligence, and analytics applications.
Key Features of SQL Server:
Scalability: Supports huge databases and multiple concurrent users.
High Availability: Features include Always On and Failover clustering.
Security: Tight security through solid encryption, auditing, row-level security.
Performance: High-Speed in-memory OLTP and Columnstore indexes
Integration: Integrates very well with other Microsoft services and Third-Party Tools
Data Tools: In-Depth tools for ETL, reporting, data analysis
Cloud Integration: Comparatively much easier to integrate with Azure services
Management: SQL Server Management Studio for the management of Databases
Backup and Recovery: Automated Backups, Point-in-Time Restore.
TSQL: Robust Transact-SQL in complex queries and stored procedures.
What is Snowflake?
Snowflake is a cloud-based data warehousing platform that is designed to handle large-scale data storage, processing, and analytics. It stands out due to its architecture, which separates compute, storage, and services, offering flexibility, scalability, and performance improvements over traditional data warehouses.
Key Features of Snowflake:
Scalability: Seamless scaling of storage and compute independently.
Performance: Fast query performance with automatic optimization.
Data Sharing: Secure and easy data sharing across organizations.
Multi-Cloud: Operates on AWS, Azure, and Google Cloud.
Security: Comprehensive security features including encryption and role-based access.
Zero Maintenance: Fully managed with automatic updates and maintenance.
Data Integration: Supports diverse data formats and ETL tools.
Methods to Connect SQL Server to Snowflake
The following 4 methods can be used to transfer data from Microsoft SQL server to Snowflake easily:
Method 1: Using SnowSQL to connect SQL server to Snowflake
Method 2: Using Custom ETL Scripts to connect SQL Server to Snowflake
Method 3: Using LIKE.TG Data to connect Microsoft SQL Server to Snowflake
Method 4: SQL Server to Snowflake Using Snowpipe
Method 1: Using SnowSQL to Connect Microsoft SQL Server to Snowflake
To migrate data from Microsoft SQL Server to Snowflake, you must perform the following steps:
Step 1: Export data from SQL server using SQL Server Management Studio
Step 2: Upload the CSV file to an Amazon S3 Bucket using the web console
Step 3: Upload data to Snowflake From S3
Step 1: Export Data from SQL Server Using SQL Server Management Studio
SQL Server Management Studio (SSMS) is a data management and administration application for SQL Server.
You will use it to extract data from a SQL database and export it to CSV format. The steps to achieve this are:
Install SQL Server Management Studio if you don’t have it on your local machine.
Launch the SQL Server Management Studio and connect to your SQL Server.
From the Object Explorer window, right-click the database you want to export, go to Tasks in the context menu, and choose the Export Data option to export the table data to CSV.
The SQL Server Import and Export Wizard welcome window will pop up. At this point, you need to select the Data source you want to copy from the drop-down menu.
After that, you need to select SQL Server Native Client 11.0 as the data source.
Select an SQL Server instance from the drop-down input box.
Under Authentication, select “Use Windows Authentication”.
Just below that, you get a Database drop-down box, and from here you select the database from which data will be copied.
Once you’re done filling out all the inputs, click on the Next button.
The next window is the Choose a Destination window. Under the destination drop-down box, select the Flat File Destination for copying data from SQL Server to CSV.
Under File name, select the CSV file that you want to write to and click on the Next button.
In the next screen, select Copy data from one or more tables or views and click Next to proceed.
A “Configure Flat File Destination” screen will appear, and here you are going to select the table from the Source table or view. This action will export the data to the CSV file. Click Next to continue.
You don’t want to change anything on the Save and Run Package window so just click Next.
The next window is the Complete Wizard window which shows a list of choices that you have selected during the exporting process. Counter-check everything and if everything checks out, click the Finish button to begin exporting your SQL database to CSV.
The final window shows you whether the exporting process was successful or not.
Step 2: Upload the CSV File to an Amazon S3 Bucket Using the Web Console
After completing the exporting process to your local machine, the next step in the data transfer process from SQL Server to Snowflake is to transfer the CSV file to Amazon S3.
Steps to upload a CSV file to Amazon S3:
Start by creating a storage bucket.
Go to the AWS S3 Console
Click the Create Bucket button and enter a unique name for your bucket on the form.
Choose the AWS Region where you’d like to store your data.
Create a new S3 bucket.
Create the directory that will hold your CSV file.
In the Buckets pane, click on the name of the bucket that you created.
Click on the Actions button, and select the Create Folder option.
Enter a unique name for your new folder and click Create.
Upload the CSV file to your S3 bucket.
Select the folder you’ve just created in the previous step.
In the Upload wizard, click on the Add Files button in the upload section.
Next, a file selection dialog box will open. Here you will select the CSV file you exported earlier and then click Open.
Click on the Start Upload button and you are done!
Step 3: Upload Data to Snowflake From S3
Since you already have an Amazon Web Services (AWS) account and you are storing your data files in an S3 bucket, you can leverage your existing bucket and folder paths for bulk loading into Snowflake.
To allow Snowflake to read data from and write data to an Amazon S3 bucket, you first need to configure a storage integration object to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity.
Step 3.1: Define Read-Write Access Permissions for the AWS S3 Bucket
Allow the following actions:
“s3:PutObject”
“s3:GetObject”
“s3:GetObjectVersion”
“s3:DeleteObject”
“s3:DeleteObjectVersion”
“s3:ListBucket”
The following sample policy grants read-write access to objects in your S3 bucket.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListingOfUserFolder",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::bucket_name"
]
},
{
"Sid": "HomeDirObjectAccess",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObjectVersion",
"s3:DeleteObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::bucket_name/*"
}
]
}
For a detailed explanation of how to grant access to your S3 bucket, check out this link.
Step 3.2: Create an AWS IAM Role and record your IAM Role ARN value located on the role summary page because we are going to need it later on.
Step 3.3: Create a cloud storage integration using the STORAGE INTEGRATION command.
CREATE STORAGE INTEGRATION <integration_name>
TYPE = EXTERNAL_STAGE
STORAGE_PROVIDER = S3
ENABLED = TRUE
STORAGE_AWS_ROLE_ARN = '<iam_role>'
STORAGE_ALLOWED_LOCATIONS = ('s3://<bucket>/<path>/', 's3://<bucket>/<path>/')
[ STORAGE_BLOCKED_LOCATIONS = ('s3://<bucket>/<path>/', 's3://<bucket>/<path>/') ]
Where:
<integration_name> is the name of the new integration.
<iam_role> is the Amazon Resource Name (ARN) of the role you just created.
<bucket> is the name of an S3 bucket that stores your data files.
<path> is an optional path that can be used to provide granular control over objects in the bucket.
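Putting that together, a minimal sketch with placeholder values (the integration name, role ARN, and bucket path below are examples, not values from your account):
CREATE STORAGE INTEGRATION s3_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = S3
  ENABLED = TRUE
  STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/snowflake_access_role'
  STORAGE_ALLOWED_LOCATIONS = ('s3://mybucket/load/');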
Step 3.4: Recover the AWS IAM User for your Snowflake Account
Execute the DESCRIBE INTEGRATION command to retrieve the ARN for the AWS IAM user that was created automatically for your Snowflake account:
DESC INTEGRATION <integration_name>;
Record the STORAGE_AWS_IAM_USER_ARN (referred to below as IAM_USER_ARN) and STORAGE_AWS_EXTERNAL_ID values from the output; you will need them in the next step.
Step 3.5: Grant the IAM User Permissions to Access Bucket Objects
Log into the AWS Management Console and from the console dashboard, select IAM.
Navigate to the left-hand navigation pane and select Roles and choose your IAM Role.
Select Trust Relationships followed by Edit Trust Relationship.
Modify the policy document with the IAM_USER_ARN and STORAGE_AWS_EXTERNAL_ID output values you recorded in the previous step.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "<IAM_USER_ARN>"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "<STORAGE_AWS_EXTERNAL_ID>"
}
}
}
]
}
Click the Update Trust Policy button to save the changes.
Step 3.6: Create an External Stage that references the storage integration you created
grant create stage on schema public to role <snowflake_role>;
grant usage on integration s3_int to role <snowflake_role>;
use schema mydb.public;
create stage my_s3_stage
storage_integration = s3_int
url = 's3://bucket1/path1'
file_format = my_csv_format;
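Note that the stage above references a named file format, my_csv_format, which must already exist. A minimal sketch of creating it, assuming comma-delimited files with a header row (adjust the options to your export):
create or replace file format my_csv_format
  type = 'CSV'
  field_delimiter = ','
  skip_header = 1
  null_if = ('NULL', 'null')
  empty_field_as_null = true;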
Step 3.7: Execute COPY INTO <table> SQL command to load data from your staged files into the target table using the Snowflake client, SnowSQL.
Since we have already configured an AWS IAM role with the required policies and permissions to access your external S3 bucket, and created an S3 stage in Snowflake, pulling this data into your tables is extremely simple.
copy into mytable
from @my_s3_stage;
This SQL command loads data from all the files in the stage (that is, your S3 bucket path) into your Snowflake table.
SQL Server to Snowflake: Limitations and Challenges of Using Custom Code Method
The above method of connecting SQL Server to Snowflake comes along with the following limitations:
This method is only intended for files that do not exceed 160GB. Anything above that will require you to use the Amazon S3 REST API.
This method doesn’t support real-time data streaming from SQL Server into your Snowflake DW.
If your organization has a use case for Change Data Capture (CDC), then you could create a data pipeline using Snowpipe.
Also, although this is one of the most popular methods of connecting SQL Server to Snowflake, there are a lot of steps that you need to get right to achieve a seamless migration. Some of you might even go as far as to consider this approach to be cumbersome and error-prone.
Method 2: Using Custom ETL Scripts
Custom ETL scripts are programs that extract, transform, and load data from SQL Server to Snowflake. They require coding skills and knowledge of both databases.
To use custom ETL scripts, you need to:
1. Install the Snowflake ODBC driver or a client library for your language (e.g., Python, Java, etc.).
2. Get the connection details for Snowflake (e.g., account name, username, password, warehouse, database, schema, etc.).
3. Choose a language and set up the libraries to interact with SQL Server and Snowflake.
4. Write a SQL query to extract the data you want from SQL Server and use it in your script to pull the data, then stage the results and load them into Snowflake (for example, with PUT and COPY INTO as in Method 1). A sketch of such an extraction query follows.
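As an illustration of step 4, the extraction query can include a watermark filter so that each run only pulls rows added or changed since the previous sync. The table, columns, and parameter below are hypothetical:
-- pull only rows modified since the last successful sync
SELECT order_id, customer_id, amount, last_modified
FROM dbo.orders
WHERE last_modified > @last_sync_time  -- parameter supplied by your script
ORDER BY last_modified;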
Drawbacks of Utilizing ETL Scripts
While employing custom ETL scripts to transfer data from SQL Server to Snowflake offers advantages, it also presents potential drawbacks:
Complexity and Maintenance Burden: Custom scripts demand more resources for development, testing, and upkeep compared to user-friendly ETL tools, particularly as data sources or requirements evolve.
Limited Scalability: Custom scripts may struggle to efficiently handle large data volumes or intricate transformations, potentially resulting in performance challenges unlike specialized ETL tools.
Security Risks: Managing credentials and sensitive data within scripts requires meticulous attention to security. Storing passwords directly within scripts can pose significant security vulnerabilities if not adequately safeguarded.
Minimal Monitoring and Logging Capabilities: Custom scripts may lack advanced monitoring and logging features, necessitating additional development effort to establish comprehensive tracking mechanisms.
Extended Development Duration: Developing custom scripts often takes longer compared to configuring ETL processes within visual tools.
Method 3: Using LIKE.TG Data to Connect SQL Server to Snowflake
LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources and load it to destinations but also transform and enrich your data to make it analysis-ready.
The following steps are required to connect Microsoft SQL Server to Snowflake using LIKE.TG ’s Data Pipeline:
Step 1: Connect to your Microsoft SQL Server source.
Click PIPELINES in the Navigation Bar.
Click + CREATE in the Pipelines List View.
Select SQL Server as your source.
In the Configure your SQL Server Source page, specify the connection settings for your SQL Server instance.
You can read more about using SQL server as a source connector for LIKE.TG here.
Step 2: Configure your Snowflake Data Warehouse as Destination
Click DESTINATIONS in the Navigation Bar.
Click + CREATE in the Destinations List View.
In the Add Destination page, select Snowflake as the Destination type.
In the Configure your Snowflake Warehouse page, specify the connection settings for your Snowflake warehouse.
This is how simple it can be to load data from SQL Server to Snowflake using LIKE.TG .
Method 4: SQL Server to Snowflake Using Snowpipe
Snowpipe is a feature of Snowflake that allows you to load data from external sources into Snowflake tables automatically and continuously. Here are the steps involved in this method:
1. Create an external stage in Snowflake that points to the S3 bucket where you will store the CSV files.
2. Create a target table in Snowflake whose columns match the schema of the CSV files.
3. Create a pipe in Snowflake that copies data from the external stage to the table. Enable auto-ingest and specify the file format as CSV (a sketch of such a pipe follows this list).
4. Enable Snowpipe with the below command
ALTER ACCOUNT SET PIPE_EXECUTION_PAUSED = FALSE;
5. If your export script needs to connect to Snowflake directly, install the Snowflake JDBC driver on your local machine. Then create a batch script that exports data from SQL Server to CSV files and uploads them to the S3 bucket that the external stage points to.
6. Schedule the batch file to run regularly using a tool like Windows Task Scheduler or Cron. Check out this documentation for more details.
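For step 3, a hedged sketch of what the pipe definition could look like; the pipe, table, and stage names below are placeholders:
create pipe sqlserver_csv_pipe
  auto_ingest = true
  as
  copy into my_target_table
  from @my_csv_stage
  file_format = (type = 'CSV' skip_header = 1);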
Drawbacks of Snowpipe Method
Here are some key limitations of using Snowpipe for data migration from SQL Server to Snowflake:
File Size Restrictions: Snowflake imposes a per-file size limit for direct ingestion (around 160GB). Files exceeding this necessitate additional steps like splitting them or using the S3 REST API, adding complexity.
Real-Time/CDC Challenges: Snowpipe is ideal for micro-batches and near real-time ingestion. But, it isn’t built for true real-time continuous data capture (CDC) of every single change happening in your SQL Server.
Error Handling: Error handling for failed file loads through Snowpipe can become a bit nuanced. You need to configure options like ON_ERROR = CONTINUE in your COPY INTO statements to prevent individual file failures from stopping the entire load process.
Transformation Limitations: Snowpipe primarily handles loading data into Snowflake. For complex transformations during the migration process, you may need a separate ETL/ELT tool to work with the Snowpipe-loaded data within Snowflake.
Why migrate data from MS SQL Server to Snowflake?
Enhanced Scalability and Elasticity: MSSQL Server, while scalable, often requires manual infrastructure provisioning for scaling compute resources. Snowflake’s cloud-based architecture offers elastic scaling, allowing you to easily adjust compute power up or down based on workload demands. You only pay for the resources you use, leading to potentially significant cost savings.
Reduced Operational Burden: Managing and maintaining on-premises infrastructure associated with MSSQL Server can be resource-intensive. Snowflake handles all infrastructure management, freeing up your IT team to focus on core data initiatives.
Performance and Concurrency: Snowflake’s architecture is designed to handle high concurrency and provide fast query performance, making it suitable for demanding analytical workloads and large-scale data processing.
Additional Resources on SQL Server to Snowflake
Explore more about Loading Data to Snowflake
Conclusion
The article introduced you to how to migrate data from SQL Server to Snowflake. It also provided a step-by-step guide to 4 methods that you can use to connect Microsoft SQL Server to Snowflake easily.
The article also talked about the limitations and benefits associated with these methods. The manual method using SnowSQL works fine when it comes to transferring data from Microsoft SQL Server to Snowflake, but there are still numerous limitations to it.
FAQ on SQL Server to Snowflake
Can you connect SQL Server to Snowflake?
Yes, connecting SQL Server to Snowflake is a straightforward process. You can do this using ODBC drivers or through automated platforms like LIKE.TG, which makes the task more manageable.
How to migrate data from SQL to Snowflake?
You can migrate your data from SQL Server to Snowflake using the following methods:
Method 1: Using SnowSQL to connect SQL Server to Snowflake
Method 2: Using Custom ETL Scripts to connect SQL Server to Snowflake
Method 3: Using LIKE.TG Data to connect Microsoft SQL Server to Snowflake
Method 4: SQL Server to Snowflake Using Snowpipe
Why move from SQL Server to Snowflake?
We move from SQL Server to Snowflake because it provides:
1. Enhanced scalability and elasticity.
2. Reduced operational burden.
3. High concurrency and fast query performance.
Can SQL be used for snowflakes?
Yes. Snowflake provides its own SQL dialect, Snowflake SQL, which is ANSI SQL-compliant.
What are your thoughts about the different approaches to moving data from Microsoft SQL Server to Snowflake? Let us know in the comments.