Google Analytics to Snowflake: 2 Easy Methods
Google Analytics is the most popular web analytics service on the market, used to gather crucial information on website events: web traffic, purchases, signups, and other aspects of browser and customer behavior. However, the sheer volume of data that Analytics provides leads many users to look for ways to analyze that information more deeply. Enter Snowflake, a platform designed from the ground up to be a cloud-based data warehouse. For many Analytics users, Snowflake is the ideal solution for their data analysis needs, and in this article we walk you through the process of moving your data from Google Analytics to Snowflake.

Introduction to Google Analytics
Google Analytics (GA) is a web analytics service that offers statistics and basic analytical tools for your Search Engine Optimization (SEO) and marketing needs. It is free and part of Google's Marketing Platform, so anyone with a Google account can take advantage of it. Google Analytics is used to monitor website performance and gather visitor data. It can help organizations identify the most popular sources of user traffic, measure the success of their marketing campaigns and initiatives, track goal completion, discover patterns and trends in user engagement, and obtain other visitor information, such as demographics. Small and medium-sized retail websites commonly use Google Analytics to optimize marketing campaigns, increase website traffic, and better retain visitors.

Here are the key features of Google Analytics:
- Conversion Tracking: Conversion points (such as a contact form submission, e-commerce sale, or phone call) can be tracked in Google Analytics once they have been identified on your website. You will be able to see when someone converted, the traffic source that referred them, and much more.
- Third-Party Referrals: A list of third-party websites that sent you traffic will be available. That way you will know which sites are worth spending more time on, as well as whether any new sites have started linking to yours.
- Custom Dashboards: You can create semi-custom dashboards for your analytics with Google Analytics. You can add web traffic, conversions, and keyword referrals to your dashboard if they are essential to you. To share your reports, you can export your dashboard into PDF or CSV format.
- Traffic Reporting: Google Analytics is essentially a traffic reporter. The service's statistics reveal how many people visit your site each day. You can also track patterns over time, which helps you make better decisions about online marketing.

Solve your data replication problems with LIKE.TG's reliable, no-code, automated pipelines with 150+ connectors. Get your free trial right away!

Introduction to Snowflake
Snowflake is a cloud data warehouse that launched in 2015 and is primarily available on AWS and Azure. Snowflake is similar to BigQuery in that it separates storage from compute: it stores the actual data of your tables in object storage such as S3 and can then provision any number of compute nodes to process that data. In addition, Snowflake offers instant, on-demand access to effectively unlimited compute and storage resources.

Snowflake Benefits:
- Snowflake is specifically optimized for analytics workloads, which makes it ideal for businesses dealing with very complex data sets.
- Snowflake offers better performance in terms of both storage capacity and query speed.
- Snowflake also offers better security than a typical on-premises data warehouse, because cloud data warehouses have to meet stringent security requirements.
- Migrating your data to the cloud is also cost-effective, since there is no large initial outlay and you do not have to maintain physical infrastructure.

Methods to Move Data from Google Analytics to Snowflake
Before we get started, there are essentially two ways to move your data from Google Analytics to Snowflake:

Method 1: Using Custom ETL Scripts to Move Data from Google Analytics to Snowflake
This requires you to understand the Google Analytics API, write code to pull data from it, clean and prepare the data, and finally load it into Snowflake. This can be a time-intensive task and (let's face it) not the best use of your time as a developer.

Method 2: Using LIKE.TG Data to Move Data from Google Analytics to Snowflake
LIKE.TG, a data integration platform, gets the same results in a fraction of the time with none of the hassle. LIKE.TG can help you bring Google Analytics data to Snowflake in real time for free, without having to write a single line of code. Get Started with LIKE.TG for Free. LIKE.TG's pre-built integration with Google Analytics (among 100+ sources) takes full charge of the data transfer process, allowing you to focus on key business activities.

This article provides an overview of both approaches so you can assess the pros and cons of each and choose the route that best suits your use case.

Understanding the Methods to Connect Google Analytics to Snowflake
Here are the methods you can use to establish a connection from Google Analytics to Snowflake:
Method 1: Using Custom ETL Scripts to Move Data from Google Analytics to Snowflake
Method 2: Using LIKE.TG Data to Move Data from Google Analytics to Snowflake

Method 1: Using Custom ETL Scripts to Move Data from Google Analytics to Snowflake
Here are the steps you can use to set up a connection from Google Analytics to Snowflake using custom ETL scripts:
Step 1: Accessing Data on Google Analytics
Step 2: Transforming Google Analytics Data
Step 3: Transferring Data from Google Analytics to Snowflake
Step 4: Maintaining Data on Snowflake

Step 1: Accessing Data on Google Analytics
The first step in moving your data is to access it, which can be done using the Google Analytics Reporting API. Using this API, you can create reports and dashboards, both for use in your Analytics account and in other applications, such as Snowflake. Keep in mind, however, that only those with a paid Analytics 360 subscription can use all the features of the API, such as viewing event-level data, while users of the free version of Analytics can only create reports from less granular, aggregated data.

Step 2: Transforming Google Analytics Data
Before transferring data to Snowflake, you must define a complete and well-ordered schema for all included data. In some cases, such as with JSON or XML data, a rigid schema is not needed and the data can be loaded into Snowflake directly as semi-structured values. Many other data types cannot be moved quite so readily, however, and if you are dealing with (for example) Microsoft SQL Server data, more work is required to ensure that the data is compatible with Snowflake.
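To make the schema step concrete, here is a minimal sketch of a Snowflake target table for a simple traffic report, plus a landing table for raw semi-structured responses. Every table and column name here is an illustrative assumption, not a prescribed layout; the actual columns depend on the dimensions and metrics you request from the Reporting API.

CREATE TABLE IF NOT EXISTS analytics.ga_sessions_report (
    report_date    DATE,
    traffic_source VARCHAR,
    medium         VARCHAR,
    sessions       INTEGER,
    users          INTEGER,
    bounce_rate    FLOAT
);

-- Optional: land raw API responses as semi-structured data and shred them later
CREATE TABLE IF NOT EXISTS analytics.ga_raw_response (
    raw       VARIANT,
    loaded_at TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
);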
Google Analytics reports are conveniently expressed as spreadsheet-like tables, which map well to the similarly tabular structures of Snowflake. On the other hand, it is important to remember that these reports are samples of the primary data, and as such may return slightly different values across separate report runs, even for the same time period. Because Analytics reports and Snowflake tables are so similarly structured, a common technique is to map each key in a Reporting API response to a mirrored column in the Snowflake table, ensuring a proper conversion of the necessary data types. Because data conversion is not automatic, it is up to you to revise the tables to keep up with any changes in the primary data types.

Step 3: Transferring Data from Google Analytics to Snowflake
There are three primary ways of transferring your data to Snowflake:
- COPY INTO: The COPY INTO command is perhaps the most common technique for data transfer, whereby data files (stored either locally or in a storage solution such as Amazon S3 buckets) are copied into the data warehouse.
- PUT: The PUT command can be used to stage files prior to executing the COPY INTO command.
- Upload: Data files can be uploaded to a service such as the previously mentioned Amazon S3, allowing Snowflake to access these files directly.

Step 4: Maintaining Data on Snowflake
Maintaining an accurate database on Snowflake is a never-ending battle; with every update to Google Analytics, older data on Snowflake must be analyzed and updated to preserve the integrity of the overarching tables. This task is made somewhat easier by writing UPDATE statements in Snowflake, but you must also take care to identify and delete any duplicate records that appear in the database. Overall, maintaining your newly created Snowflake database can be a time-consuming project, which is all the more reason to look for time-saving solutions such as LIKE.TG.
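As a rough, hedged sketch of Steps 3 and 4, the statements below stage a local CSV export, copy it into the table defined earlier, and then rewrite the table to drop exact duplicates. The stage, file, and table names are assumptions for illustration only.

-- Stage the exported report file (PUT is run from SnowSQL or a client, not the web UI)
CREATE STAGE IF NOT EXISTS analytics.ga_stage;
PUT file:///tmp/ga_sessions_report.csv @analytics.ga_stage AUTO_COMPRESS=TRUE;

-- Step 3: load the staged file into the target table
COPY INTO analytics.ga_sessions_report
FROM @analytics.ga_stage/ga_sessions_report.csv.gz
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);

-- Step 4: remove exact duplicates left by repeated loads (note: this rewrites the whole table)
CREATE OR REPLACE TABLE analytics.ga_sessions_report AS
SELECT DISTINCT * FROM analytics.ga_sessions_report;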
Limitations of Using Custom ETL Scripts to Connect Google Analytics to Snowflake
Although there are other methods of integrating data from Google Analytics to Snowflake, those not using LIKE.TG must be prepared to deal with a number of limitations:
- Heavy Engineering Bandwidth: Building, testing, deploying, and maintaining the infrastructure necessary for proper data transfer requires a great deal of effort on your part.
- Not Automatic: Each time a change is made in Google Analytics, you must manually alter the code to preserve data integrity.
- Not Real-time: The steps set out in this article must be performed every single time data is moved from Analytics to Snowflake. For most users, who move data on a regular basis, following these steps each time is a cumbersome, time-consuming ordeal.
- Possibility of Irretrievable Data Loss: If an error occurs at any point in this process (say, a change in the Google Analytics API or in Snowflake), serious data corruption and loss can result.

Method 2: Using LIKE.TG Data to Move Data from Google Analytics to Snowflake
LIKE.TG is fully managed and completely automates the process of not only loading data from your desired source but also enriching it and transforming it into an analysis-ready form, without a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss. Sign up here for a 14-Day Free Trial!

LIKE.TG takes care of all the data preprocessing needed to set up a connection from Google Analytics to Snowflake and lets you focus on key business activities and draw much more powerful insights on how to generate more leads, retain customers, and take your business to new heights of profitability. It provides a consistent and reliable solution to manage data in real time and always have analysis-ready data in your desired destination.

LIKE.TG, being an official Snowflake partner, can connect Google Analytics to Snowflake in 2 simple steps:
Step 1: Connect LIKE.TG with Google Analytics 4 and all your data sources by simply logging in with your credentials.
Step 2: Configure the Snowflake destination by providing details such as Destination Name, Account Name, Account Region, Database User, Database Password, Database Schema, and Database Name.
LIKE.TG will now take care of all the heavy lifting to move your data from Google Analytics to Snowflake.

Here are some of the benefits of LIKE.TG:
- Reduced Time to Implementation: With a few clicks, users can swiftly move their data from source to destination. This drastically reduces time to insight and helps your business make key decisions faster.
- End-to-End Management: The burden of overseeing the inessential minutiae of data migration is removed from the user, freeing them to make more efficient use of their time.
- A Robust System for Alerts and Notifications: LIKE.TG offers a wide array of tools to ensure that changes and errors are detected and that you are notified of them.
- Complete, Consistent Data Transfer: Whereas some data migration solutions can lose data when errors appear, LIKE.TG uses a proprietary staging mechanism to quarantine problematic data fields so that you can fix errors on a case-by-case basis and still move the data.
- Comprehensive Scalability: With LIKE.TG, it is no problem to incorporate new data sets, regardless of size. In addition to Google Analytics, LIKE.TG can interface with a number of other analytics, marketing, and cloud applications; LIKE.TG aims to be the one-stop solution for all your data transfer needs.
- 24/7 Support: LIKE.TG provides a team of product experts, ready to assist 24 hours a day, 7 days a week.

Simplify your data analysis with LIKE.TG today!

Conclusion
For users who seek a more in-depth understanding of their web traffic, moving data from Google Analytics to a Snowflake data warehouse is an important step. Doing it by hand, however, can be an arduous and time-intensive process, one that a tool like LIKE.TG can streamline immensely with no effort needed on your end. Furthermore, LIKE.TG is compatible with 100+ data sources, including 40+ free sources like Google Analytics, allowing you to interface with databases, cloud storage solutions, and more. Visit Our Website To Explore LIKE.TG. Still not sure that LIKE.TG is right for you? Sign up to try our risk-free, expense-free 14-day trial and experience for yourself the ease and efficiency of the LIKE.TG Data Integration Platform. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs.
Google BigQuery Architecture: The Comprehensive Guide
Google BigQuery is a fully managed data warehouse tool. It allows scalable analysis over petabytes of data, querying with ANSI SQL, integration with various applications, and more. To use all of these features effectively, you need to understand BigQuery's architecture, maintenance, pricing, and security. This guide decodes the most important components of Google BigQuery: architecture, maintenance, performance, pricing, and security.

What Is Google BigQuery?
Google BigQuery is a cloud data warehouse run by Google. It is capable of analyzing terabytes of data in seconds. If you know how to write SQL queries, you already know how to query it. In fact, there are plenty of interesting public datasets shared in BigQuery, ready to be queried by you. You can access BigQuery by using the GCP console or the classic web UI, by using a command-line tool, or by making calls to the BigQuery REST API with a variety of client libraries such as Java, .NET, or Python. There are also a variety of third-party tools that you can use to interact with BigQuery, for example to visualize or load the data.

What are the Key Features of Google BigQuery?
Why did Google release BigQuery, and why would you use it instead of a more established data warehouse solution?
- Ease of Implementation: Building your own data warehouse is expensive, time-consuming, and difficult to scale. With BigQuery, you just load data and pay only for what you use.
- Speed: It processes billions of rows in seconds and handles real-time analysis of streaming data.

What is the Google BigQuery Architecture?
The BigQuery architecture is based on Dremel technology, a query engine that has been used inside Google for about a decade.
- Dremel: BigQuery dynamically apportions slots to queries on an as-needed basis, maintaining fairness among multiple users who are all querying at once. A single user can get thousands of slots to run their queries. It takes more than just a lot of hardware to make your queries run fast; BigQuery requests are powered by the Dremel query engine.
- Colossus: BigQuery relies on Colossus, Google's latest-generation distributed file system. Each Google data center has its own Colossus cluster, and each Colossus cluster has enough disks to give every BigQuery user thousands of dedicated disks at a time. Colossus also handles replication, recovery (when disks crash), and distributed management.
- Jupiter Network: This is the internal data center network that allows BigQuery to separate storage and compute.

Data Model/Storage
- Columnar storage.
- Nested/repeated fields.
- No indexes: a single full table scan.

Query Execution
- Queries are executed using a tree architecture.
- A query runs on tens of thousands of machines over a fast Google network.

What is BigQuery's Columnar Database?
The Google BigQuery architecture uses column-based (columnar) storage, which helps it achieve faster query processing with fewer resources. It is the main reason why Google BigQuery handles large datasets and delivers excellent speed. Row-based storage is used in relational databases, where data is stored row by row, because that is an efficient layout for transactional workloads. Storing data in columns is efficient for analytical purposes, because analytical queries typically read only a handful of columns. Suppose a table has 1,000 columns.
If the data is stored in a row-based structure, a query that needs only 10 of those columns still has to read every column of every row to produce its output. This is not the case in Google BigQuery's columnar storage, where data is stored by column instead of by row: only the 10 columns referenced by the query are scanned, which makes the overall query processing faster.

The Google Ecosystem
Google BigQuery is a cloud data warehouse that is part of Google Cloud Platform (GCP), which means it can easily integrate with other Google products and services. Google Cloud Platform is a suite of Google services for storing and processing data, such as Google Cloud Storage, Google Bigtable, Google Drive, managed databases, and other data processing tools. Google BigQuery can process data stored in these other Google products. Google BigQuery uses standard SQL queries to create and execute machine learning models and integrates with business intelligence tools like Looker and Tableau.

Google BigQuery Comparison with Other Databases and Data Warehouses
Here is how Google BigQuery differs from other databases and data warehouses:
1) Comparison with MapReduce and NoSQL
- MapReduce vs. Google BigQuery
- NoSQL Datastore vs. Google BigQuery
2) Comparison with Redshift and Snowflake

Some Important Considerations about these Comparisons:
- If you have a reasonable volume of data, say dozens of terabytes that you query only occasionally, and query response times of up to a few minutes are acceptable, then Google BigQuery is an excellent candidate for your scenario.
- If you need to analyze a large amount of data (for example, up to a few terabytes) by running many queries that each need to be answered very quickly, and you do not need to keep the data available once the analysis is done, then an on-demand cloud solution like Amazon Redshift is a great fit. But keep in mind that, unlike Google BigQuery, Redshift needs to be configured and tuned in order to perform well.
- BigQuery's architecture is a good fit as long as data freshness is not a hard requirement. Compared to Redshift, Google BigQuery only supports hourly syncs as its fastest update frequency, so teams that need close to real-time data integration may lean toward Redshift.

Key Concepts of Google BigQuery
Now, let's look at the key concepts associated with Google BigQuery:

1) Working
BigQuery is a data warehouse, implying a degree of centralization. A typical query runs against a single dataset, but the benefits of BigQuery become even more apparent when you join datasets from completely different sources or query data that is stored outside BigQuery. If you are a power user of Sheets, you will probably appreciate the ability to do more fine-grained research on data in your spreadsheets. It is a sensible enhancement for Google to make, as it unites BigQuery with more of Google's own existing services. Previously, Google made it possible to analyze Google Analytics data in BigQuery. These sorts of integrations could make BigQuery a stronger choice in the market for cloud-based data warehouses, which is increasingly how Google positions it. Public cloud market leader Amazon Web Services (AWS) has Redshift, but no widely used spreadsheet tool.
Microsoft Azure's SQL Data Warehouse, which has been in preview for several months, does not currently have an official integration with Microsoft Excel, surprising though that may be.

2) Querying
The Google BigQuery architecture supports standard SQL queries and is compatible with ANSI SQL 2011. BigQuery's SQL support has been extended to handle nested and repeated field types as part of the data model. For example, you can use the GitHub public dataset and issue the UNNEST operator, which lets you iterate over a repeated field:

SELECT name, COUNT(1) AS num_repos
FROM `bigquery-public-data.github_repos.languages`, UNNEST(language)
GROUP BY name
ORDER BY num_repos DESC
LIMIT 10

A) Interactive Queries
Google BigQuery supports interactive querying of datasets and provides a consolidated view of the datasets across projects that you can access. The console provides features such as saving and sharing ad-hoc queries, exploring tables and schemas, and so on.

B) Automated Queries
You can automate the execution of your queries based on an event and cache the results for later use. You can use the Airflow API to orchestrate automated activities. For simple orchestrations, you can use cron jobs. To encapsulate a query as an App Engine app and run it as a scheduled cron job, you can refer to this blog.

C) Query Optimization
Each time Google BigQuery executes a query, it performs a full scan of the referenced columns; it does not support indexes. Because the performance and cost of a query depend on the amount of data scanned, you should design your queries to reference only the columns that are strictly relevant. When you use partitioned tables, make sure that only the relevant partitions are scanned. You can also refer to the detailed blog here to understand the performance characteristics after a query executes.

D) External Sources
With federated data sources, you can run queries on data that exists outside of Google BigQuery, although this method has performance implications. You can also use query federation to perform ETL from an external source into Google BigQuery.

E) User-Defined Functions
Google BigQuery supports user-defined functions for logic that is hard to express in plain SQL. User-defined functions let you easily extend the built-in SQL functions. They are written in JavaScript, can take a list of values, and return a single value.

F) Query Sharing
Collaborators can save and share queries between team members. This makes data exploration and getting up to speed on a new dataset or query pattern much easier.

3) ETL/Data Load
There are various approaches to loading data into BigQuery. If you are moving data from Google applications such as Google Analytics or Google AdWords, Google provides the robust BigQuery Data Transfer Service, its own intra-product data migration tool. Data loads from other data sources (databases, cloud applications, and more) can be accomplished by deploying engineering resources to write custom scripts. The broad steps are to extract data from the source, transform it into a format that BigQuery accepts, upload the data to Google Cloud Storage (GCS), and finally load it into Google BigQuery from GCS.
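As a minimal, hedged sketch of that final load step (the bucket, file pattern, and table name are placeholders), a CSV export that is already sitting in GCS can be loaded with a single SQL statement; the same load can also be performed with the bq command-line tool or the client libraries.

LOAD DATA INTO mydataset.page_views
FROM FILES (
  format = 'CSV',
  skip_leading_rows = 1,
  uris = ['gs://my-example-bucket/exports/page_views_*.csv']
);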
A few examples of how to perform this can be found in these guides: PostgreSQL to BigQuery and SQL Server to BigQuery. A word of caution, though: hand-coding scripts to move data to Google BigQuery is a complex and cumbersome process. A third-party data pipeline platform such as LIKE.TG can make this a hassle-free process for you.

Simplify ETL Using LIKE.TG's No-code Data Pipeline
LIKE.TG Data helps you directly transfer data from 150+ data sources (including 40+ free sources) to business intelligence tools, data warehouses, or a destination of your choice in a completely hassle-free and automated manner. LIKE.TG is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without writing a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss. LIKE.TG takes care of all the data preprocessing needed to set up the integration and lets you focus on key business activities and draw much more powerful insights on how to generate more leads, retain customers, and take your business to new heights of profitability. It provides a consistent and reliable solution to manage data in real time and always have analysis-ready data in your desired destination. Get Started with LIKE.TG for Free

4) Pricing Model
A) Google BigQuery Storage Cost
- Active: monthly charge for stored data modified within the last 90 days.
- Long-term: monthly charge for stored data that has not been modified within 90 days. This rate is usually lower than the active rate.
B) Google BigQuery Query Cost
- On-demand: based on the amount of data scanned.
- Flat rate: fixed monthly cost, ideal for enterprise users.
Free usage is available for the operations below:
- Loading data (network pricing applies for inter-region loads).
- Copying data.
- Exporting data.
- Deleting datasets.
- Metadata operations.
- Deleting tables, views, and partitions.

5) Maintenance
Google has managed to solve a lot of common data warehouse concerns by throwing orders of magnitude more hardware at the underlying problems and thus eliminating them altogether. Unlike with Amazon Redshift, running VACUUM in Google BigQuery is not an option: BigQuery is specifically architected without the need for the resource-intensive VACUUM operation that is recommended for Redshift. BigQuery's pricing is also quite different from Redshift's. Keep in mind that, by design, Google BigQuery favors append-style workloads; large-scale updates or deletes are typically handled by rewriting the affected table or partitions (row-level DML is available but subject to quotas). However, Google has implemented ways for users to reduce the amount of data processed:
- Partition tables and specify the partition date in queries.
- Use wildcard tables to shard data by an attribute.
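To illustrate those two data-reduction techniques, here is a hedged sketch in which the dataset and table names are invented: the first query prunes partitions of a date-partitioned table with a date filter, and the second uses _TABLE_SUFFIX to limit a wildcard scan to one month of date-sharded tables.

-- Scan only one week of a date-partitioned table
SELECT user_id, COUNT(*) AS events
FROM mydataset.events
WHERE event_date BETWEEN '2024-06-01' AND '2024-06-07'
GROUP BY user_id;

-- Restrict a wildcard scan to the June shards of events_YYYYMMDD tables
SELECT COUNT(*) AS june_rows
FROM `mydataset.events_*`
WHERE _TABLE_SUFFIX BETWEEN '20240601' AND '20240630';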
6) Security
The fastest hardware and most advanced software are of little use if you cannot trust them with your data. BigQuery's security model is tightly integrated with the rest of Google Cloud Platform, so it is possible to take a holistic view of your data security. BigQuery uses Google's Identity and Access Management (IAM) access control system to assign specific permissions to individual users or groups of users. BigQuery also ties in tightly with Google's Virtual Private Cloud (VPC) policy controls, which can protect against users who try to access data from outside your organization or who try to export it to third parties. Both IAM and VPC controls are designed to work across Google Cloud products, so you do not have to worry that certain products create a security hole.

BigQuery is available in every region where Google Cloud has a presence, enabling you to process data in the location of your choosing. At the time of writing, Google Cloud has more than two dozen data centers around the world, and new ones are being opened at a fast rate. If you have business reasons for keeping data in the US, it is possible to do so: just create your dataset with the US region code, and all of your queries against the data will run within that region. Know more about Google BigQuery security from here.

7) Features
Some features of the Google BigQuery data warehouse are listed below:
- Just upload your data and run SQL: no cluster deployment, no virtual machines, no setting keys or indexes, and no software.
- Separate storage and compute: no need to deploy multiple clusters and duplicate data into each one.
- Manage permissions on projects and datasets with access control lists.
- Seamless scaling with usage: compute scales with usage, without cluster resizing, and thousands of cores are used per query.
- Deployed across multiple data centers by default, with multiple factors of replication for maximum data durability and service uptime.
- Stream millions of rows per second for real-time analysis.
- Analyze terabytes of data in seconds, with storage that scales to petabytes.

8) Interaction
A) Web User Interface
Run queries and examine results, manage databases and tables, save queries and share them across the organization for reuse, and view a detailed query history.
B) Visualize in Data Studio
View BigQuery results with charts, pivots, and dashboards.
C) API
A programmatic way to access Google BigQuery.
D) Service Limits for Google BigQuery
- Concurrent rate limit for on-demand, interactive queries: 50.
- Daily query size limit: unlimited by default.
- Daily destination table update limit: 1,000 updates per table per day.
- Query execution time limit: 6 hours.
- Maximum number of tables referenced per query: 1,000.
- Maximum unresolved query length: 256 KB.
- Maximum resolved query length: 12 MB.
- Concurrent rate limit for on-demand, interactive queries against Cloud Bigtable external data sources: 4.
E) Integrating with TensorFlow
BigQuery has a feature, BigQuery ML, that lets you create and use simple machine learning models, as well as run deep learning predictions with TensorFlow models. This is the key technology for integrating the scalable data warehouse with the power of ML. It enables a variety of smart data analytics, such as logistic regression on a large dataset, similarity search, and recommendations on images, documents, products, or users, by processing feature vectors of the contents. You can even run TensorFlow model predictions inside BigQuery: after data scientists train a cutting-edge neural network model with TensorFlow or Google Cloud Machine Learning, you can move the model to BigQuery and execute predictions with it inside BigQuery. This means any employee in your company can use the power of BigQuery for daily data analytics tasks, including image analytics and business data analytics on terabytes of data, processed in tens of seconds, entirely within BigQuery and without any engineering knowledge.
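As a small, hedged illustration of BigQuery ML (the dataset, table, and column names below are invented for the example), training a logistic regression model and scoring new rows takes just two SQL statements.

-- Train a logistic regression model on labelled data
CREATE OR REPLACE MODEL mydataset.churn_model
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT churned, sessions_last_30d, total_spend, days_since_signup
FROM mydataset.customer_features;

-- Score new rows with the trained model
SELECT customer_id, predicted_churned
FROM ML.PREDICT(
  MODEL mydataset.churn_model,
  (SELECT customer_id, sessions_last_30d, total_spend, days_since_signup
   FROM mydataset.new_customers)
);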
9) Performance
Google BigQuery rose from Dremel, Google's distributed query engine. Dremel can handle terabytes of data in seconds by leveraging distributed computing within a serverless architecture, processing complex queries on many servers in parallel to significantly improve speed. In the following sections, you will look at four critical components of Google BigQuery performance:
- Tree Architecture
- Serverless Service
- SQL and Programming Language Support
- Real-time Analytics

Tree Architecture
BigQuery and Dremel can scale to thousands of machines by structuring computations as an execution tree. A root server receives an incoming query and relays it to branches, also known as mixers, which modify incoming queries and deliver them to leaf nodes, also known as slots. Working in parallel, the leaf nodes handle the nitty-gritty of filtering and reading the data. The results are then moved back down the tree, where the mixers aggregate them and send them to the root as the answer to the query.

Serverless Service
In most data warehouse environments, organizations have to specify and commit to the server hardware on which computations run. Administrators have to provision for performance, elasticity, security, and reliability. A serverless model removes this constraint: processing is automatically distributed over a large number of machines working simultaneously. By leveraging Google BigQuery's serverless model, database administrators and data engineers can focus less on infrastructure and server provisioning and more on extracting actionable insights from data.

SQL and Programming Language Support
Users can access BigQuery through standard SQL, which most users are already familiar with. Google BigQuery also has client libraries for writing applications that access data from Python, Java, Go, C#, PHP, Ruby, and Node.js.

Real-time Analytics
Google BigQuery can also run and process reports on real-time data by using other GCP resources and services. Data warehouses typically provide analytics after data from multiple sources has been accumulated and stored, which often happens in batches throughout the day. Apart from batch processing, Google BigQuery also supports streaming at a rate of millions of rows of data per second.

10) Use Cases
You can use the Google BigQuery data warehouse in the following cases:
- Use it when you have queries that take more than five seconds in a relational database. The idea of BigQuery is to run complex analytical queries, so there is little point in running queries that only do simple aggregation or filtering.
- BigQuery is suitable for "heavy" queries that operate on a large set of data. The bigger the dataset, the more performance you are likely to gain by using BigQuery. (For comparison, the sample dataset used here was only 330 MB: megabytes, not even gigabytes.)
- BigQuery is good for scenarios where data does not change often, because it has a built-in cache. If you run the same query and the data in the tables has not changed, BigQuery simply reuses the cached results and does not execute the query again. It also does not charge for queries served from the cache.
- You can also use BigQuery when you want to reduce the load on your relational database. Analytical queries are "heavy," and overusing them on a relational database can lead to performance issues.
So, you could eventually be forced to think about scaling your server. With BigQuery, however, you can offload these heavy analytical queries to a separate service, so they do not affect your main relational database.

Conclusion
BigQuery is a sophisticated, mature service that has been around for many years. It is feature-rich, economical, and fast. BigQuery's integration with Google Drive and the free Data Studio visualization toolset is very useful for understanding and analyzing big data, and it can process several terabytes of data within a few seconds. The service is deployed across existing and future Google Cloud Platform (GCP) regions. Its serverless model is certainly a strong option for obtaining maximum query performance at minimal infrastructure cost. If you want to integrate your data from various sources and load it into Google BigQuery, then try LIKE.TG. Visit our Website to Explore LIKE.TG. Businesses can use automated platforms like LIKE.TG Data to set up the integration and handle the ETL process. It helps you directly transfer data from various data sources to a data warehouse, business intelligence tools, or any other desired destination in a fully automated and secure manner without having to write any code, and it provides a hassle-free experience. Want to take LIKE.TG for a spin? Sign up for a 14-day free trial and experience the feature-rich LIKE.TG suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs. So, what are your thoughts on Google BigQuery? Let us know in the comments.
Google BigQuery ETL: 11 Best Practices For High Performance
Google BigQuery, a fully managed cloud data warehouse for analytics from Google Cloud Platform (GCP), is one of the most popular cloud-based analytics solutions. Due to its unique architecture and seamless integration with other GCP services, there are certain best practices to consider when configuring Google BigQuery ETL (Extract, Transform, Load) and migrating data to BigQuery. This article gives you a bird's-eye view of how Google BigQuery can enhance the ETL process in a seamless manner. Read along to discover how you can use Google BigQuery ETL for your organization!

Best Practices to Perform Google BigQuery ETL
Given below are 11 best practices and strategies you can use to perform Google BigQuery ETL:
1. GCS as a Staging Area for BigQuery Upload
2. Handling Nested and Repeated Data
3. Data Compression Best Practices
4. Time Series Data and Table Partitioning
5. Streaming Insert
6. Bulk Updates
7. Transforming Data after Load (ELT)
8. Federated Tables for Ad-hoc Analysis
9. Access Control and Data Encryption
10. Character Encoding
11. Backup and Restore

Simplify BigQuery ETL with LIKE.TG's No-code Data Pipeline
LIKE.TG is the only real-time ELT no-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ data sources (40+ free sources), we help you not only export data from sources and load data to destinations but also transform and enrich your data and make it analysis-ready. Get Started with LIKE.TG for Free. Its completely automated pipeline delivers data in real time without any loss from source to destination. Its fault-tolerant and scalable architecture ensures that the data is handled in a secure, consistent manner with zero data loss and supports different forms of data. SIGN UP HERE FOR A 14-DAY FREE TRIAL

1. GCS – Staging Area for BigQuery Upload
Unless you are loading data directly from your local machine, the first step in Google BigQuery ETL is to upload data to GCS. To move data to GCS you have multiple options:
- gsutil is a command-line tool that can be used to upload data to GCS from different servers.
- If your data sits in an online data source like AWS S3, you can use the Storage Transfer Service from Google Cloud, which also lets you schedule transfer jobs.
Other things to note while loading data to GCS:
- The GCS bucket and the Google BigQuery dataset should be in the same location, with one exception: if the dataset is in the US multi-regional location, data can be loaded from a GCS bucket in any regional or multi-regional location.
- The formats supported for loading from GCS to Google BigQuery are comma-separated values (CSV), JSON (newline-delimited), Avro, Parquet, ORC, Cloud Datastore exports, and Cloud Firestore exports.

2. Nested and Repeated Data
This is one of the most important Google BigQuery ETL best practices. Google BigQuery performs best when the data is denormalized. Instead of keeping relations, denormalize the data and take advantage of nested and repeated fields. Nested and repeated fields are supported in the Avro, Parquet, ORC, and JSON (newline-delimited) formats. STRUCT is the type used to represent an object, which can be nested, and ARRAY is the type used for repeated values.
For example, the addresses field in the following row from a BigQuery table is an array of structs:

{
  "id": "1",
  "first_name": "Ramesh",
  "last_name": "Singh",
  "dob": "1998-01-22",
  "addresses": [
    {
      "status": "current",
      "address": "123 First Avenue",
      "city": "Pittsburgh",
      "state": "WA",
      "zip": "11111",
      "numberOfYears": "1"
    },
    {
      "status": "previous",
      "address": "456 Main Street",
      "city": "Pennsylvania",
      "state": "OR",
      "zip": "22222",
      "numberOfYears": "5"
    }
  ]
}

3. Data Compression
The next vital Google BigQuery ETL best practice concerns data compression. Most of the time, data will be compressed before transfer. Consider the points below while compressing data:
- Binary Avro is the most efficient format for loading compressed data.
- The Parquet and ORC formats are also good, as they can be loaded in parallel.
- For CSV and JSON, Google BigQuery can load uncompressed files significantly faster than compressed files, because uncompressed files can be read in parallel.

4. Time Series Data and Table Partitioning
Time series data is a generic term for a sequence of data points paired with timestamps. Common examples are clickstream events from a website or transactions from a point-of-sale machine. The velocity of this kind of data is much higher, and the volume grows over time. Partitioning is a common technique used to analyze time series data efficiently, and Google BigQuery has good support for this with partitioned tables. Partitioned tables are crucial in Google BigQuery ETL operations because they help with how data is stored. A partitioned table is a special Google BigQuery table that is divided into segments, often called partitions. It is important to partition bigger tables for better maintainability and query performance. Partitioning also helps control costs by reducing the amount of data read by a query. Automated tools like LIKE.TG Data can help you partition BigQuery ETL tables from within the UI, which streamlines your ETL even further. To learn more about partitioning in Google BigQuery, you can read our blog here.

Google BigQuery has three main options for partitioning a table (see the sketch after this list):
- Ingestion-time partitioned tables: for this type of table, BigQuery automatically loads data into daily, date-based partitions that reflect the data's ingestion date. A pseudo column named _PARTITIONTIME holds this date information and can be used in queries.
- Partitioned tables: the most common type of partitioning, based on a TIMESTAMP or DATE column. Data is written to a partition based on the date value in that column, and queries can specify predicate filters on the partitioning column to reduce the amount of data scanned. Use the date or timestamp column that is most frequently used in queries as the partition column; it should also distribute data evenly across partitions, so make sure it has enough cardinality. Also note that the maximum number of partitions per partitioned table is 4,000, and legacy SQL is not supported for querying partitioned tables or for writing query results to them.
- Sharded tables: you can also shard tables using a time-based naming approach such as [PREFIX]_YYYYMMDD and combine them with a UNION (or a wildcard table) when selecting data.
Generally, partitioned tables perform better than tables sharded by date. However, if you have a specific use case that requires multiple tables, you can use sharded tables. Ingestion-time partitioned tables can be tricky if you re-insert data as part of a bug fix.
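The sketch below shows a column-partitioned table and a query that prunes partitions; every dataset, table, and column name is a placeholder for illustration.

-- Create a table partitioned on the DATE derived from a TIMESTAMP column
CREATE TABLE mydataset.clickstream_events (
  event_id  STRING,
  user_id   STRING,
  event_ts  TIMESTAMP,
  page_path STRING
)
PARTITION BY DATE(event_ts)
OPTIONS (require_partition_filter = TRUE);

-- Only the partitions for the filtered dates are scanned
SELECT page_path, COUNT(*) AS views
FROM mydataset.clickstream_events
WHERE DATE(event_ts) BETWEEN '2024-06-01' AND '2024-06-07'
GROUP BY page_path;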
5. Streaming Insert
The next vital Google BigQuery ETL best practice concerns how data is actually inserted. When inserting data into a Google BigQuery table in batch mode, a load job is created that reads data from the source and inserts it into the table. Streaming inserts instead make data queryable without waiting for a load job. Streaming inserts can be performed on any Google BigQuery table using the Cloud SDKs or other GCP services like Dataflow (an auto-scaling stream and batch data processing service from GCP). Note the following when performing streaming inserts:
- Streamed data is available for querying within a few seconds of the first streaming insert into the table.
- Data takes up to 90 minutes to become available for copy and export.
- When streaming to an ingestion-time partitioned table, the value of the _PARTITIONTIME pseudo column will be NULL.
- When streaming to a table partitioned on a DATE or TIMESTAMP column, the value in that column should be between 1 year in the past and 6 months in the future. Data outside this range will be rejected.

6. Bulk Updates
Google BigQuery has quotas and limits for DML statements, which have been raised over time. As of now, the limit on combined INSERT, UPDATE, DELETE, and MERGE statements per table per day is 1,000. Note that this is the number of statements, not the number of rows; a single DML statement can affect millions of rows. Within this limit, you can run updates or merge statements affecting any number of rows without degrading query performance, unlike in many other analytical solutions.

7. Transforming Data after Load (ELT)
Google BigQuery ETL must also address ELT in some scenarios, as ELT is now a popular methodology. It is often handy to transform data within Google BigQuery using SQL, which is referred to as Extract, Load, Transform (ELT). BigQuery supports both INSERT INTO SELECT and CREATE TABLE AS SELECT for moving transformed data between tables:

INSERT ds.DetailedInventory (product, quantity)
VALUES ('television 50',
        (SELECT quantity FROM ds.DetailedInventory WHERE product = 'television'))

CREATE TABLE mydataset.top_words AS
SELECT corpus, ARRAY_AGG(STRUCT(word, word_count)) AS top_words
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY corpus;

8. Federated Tables for Ad-hoc Analysis
You can directly query data stored in the following locations from BigQuery; these are called federated data sources or tables:
- Cloud Bigtable
- GCS
- Google Drive
Things to note while using this option:
- Query performance might not be as good as with a native Google BigQuery table.
- No consistency is guaranteed if the external data changes while a query is running.
- You cannot export data from an external data source using a BigQuery job.
- At the time of writing, the Parquet and ORC formats are not supported.
- Query results are not cached, unlike with native BigQuery tables.
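As a hedged sketch of a federated source over GCS (the bucket path, column names, and options shown are assumptions, and the options available depend on your file format), an external table can be defined and then queried like any other table:

-- Define an external table over CSV files in a GCS bucket
CREATE EXTERNAL TABLE mydataset.ext_raw_orders (
  order_id   STRING,
  amount     NUMERIC,
  order_date DATE
)
OPTIONS (
  format = 'CSV',
  uris = ['gs://my-example-bucket/orders/*.csv'],
  skip_leading_rows = 1
);

-- Query it like a normal table (the performance caveats above still apply)
SELECT order_date, SUM(amount) AS daily_total
FROM mydataset.ext_raw_orders
GROUP BY order_date
ORDER BY order_date;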
9. Access Control and Data Encryption
Data stored in Google BigQuery is encrypted by default, with keys managed by GCP; alternatively, customers can manage keys themselves using the Google KMS service. To grant access to resources, BigQuery uses IAM (Identity and Access Management) down to the dataset level. Tables and views are child resources of datasets and inherit permissions from the dataset. There are predefined roles like bigquery.dataViewer and bigquery.dataEditor, or you can create custom roles.

10. Character Encoding
It can take a few attempts to get the character encoding scheme right when transferring data. The points below will help you get it correct the first time:
- To perform Google BigQuery ETL, all source data should be UTF-8 encoded, with one exception: if a CSV file is encoded in ISO-8859-1, specify that encoding explicitly and BigQuery will properly convert the data to UTF-8.
- Delimiters should be encoded as ISO-8859-1.
- Non-convertible characters will be replaced with the Unicode replacement character: �

11. Backup and Restore
Google BigQuery addresses backup and disaster recovery at the service level, so you do not need to worry about it. Still, Google BigQuery maintains a complete 7-day history of changes to tables and allows you to query a point-in-time snapshot of a table.

Concerns when using BigQuery
You should be aware of potential issues or difficulties. A deeper understanding of these concerns will help you build better data pipelines and data solutions:
- Limited data type support: BigQuery has no native map type, and data shaped as maps or deeply nested arrays and structs may need to be remodeled before it fits your analysis requirements.
- Dealing with unstructured data: BigQuery handles structured and semi-structured data with ease, but unstructured data can make things a little more difficult and may require extra optimization or transformation stages.
- Complicated workflow: getting started with BigQuery's workflow may be challenging for novices, particularly if they are unfamiliar with SQL fundamentals or other aspects of data processing.
- Limited support for modifying or deleting individual rows: to change individual rows you either rewrite the affected portion of the table or batch the changes into combined insert, update, and delete statements.
- Serial operations: BigQuery is well suited to processing bulk queries in parallel; if you try to run operations serially, you may find that it performs worse.
- Daily table update limit: a table can be updated up to 1,000 times a day by default. You will need to request a quota increase to get more updates.

Common Stages in a BigQuery ELT Pipeline
Let's look at the typical steps in a BigQuery ELT pipeline:
- Transferring data from file systems, local storage, or other media
- Loading data into Google Cloud Platform (GCP) services
- Loading data into BigQuery
- Transforming data using methods, processes, or SQL queries
There are two methods for achieving data transformation with BigQuery:
- Using Data Transfer Services: this method loads data into BigQuery using GCP-native services, and SQL handles the transformation afterwards.
- Using GCS: in this method, tools such as DistCp, Sqoop, Spark jobs, gsutil, and others are used to load data into a GCS (Google Cloud Storage) bucket, and SQL can again perform the transformation.

Conclusion
In this article, you have learned 11 best practices you can employ to perform Google BigQuery ETL operations. However, performing these operations manually time and again can be very taxing and is not feasible: implementing them by hand consumes time and resources, and writing custom scripts is error-prone. Moreover, you need full working knowledge of the backend tools to successfully implement an in-house data transfer mechanism, and you will have to regularly map new files to the Google BigQuery data warehouse. Want to take LIKE.TG for a spin?
Sign up for a 14-day free trial and experience the feature-rich LIKE.TG suite first hand. Check out LIKE.TG pricing and find a plan that suits you best. Have any further queries? Get in touch with us in the comments section below.
Google Sheets to BigQuery: 3 Ways to Connect & Migrate Data
As your company grows, it starts generating terabytes of complex data spread across different sources. That is when you have to incorporate a data warehouse like BigQuery into your data architecture and migrate data from Google Sheets to BigQuery. Sifting through terabytes of data in spreadsheets is a monotonous endeavor and places a ceiling on what is achievable in data analysis. At this juncture, incorporating a data warehouse like BigQuery becomes a necessity. In this blog post, we cover extensively how you can move data from Google Sheets to BigQuery.

Methods to Connect Google Sheets to BigQuery
Now that we have built some background on spreadsheets and why it is important to incorporate BigQuery into your data architecture, let's look at how to import data. It is assumed here that you already have a GCP account; if you don't, you can set one up. Google offers new users $300 in free credits for a year, and you can use these free credits to get a feel for GCP and access BigQuery.

Method 1: Using LIKE.TG to Move Data from Google Sheets to BigQuery
LIKE.TG is the only real-time ELT no-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. Using a fully managed platform like LIKE.TG, you bypass all the aforementioned complexities and import Google Sheets (supported as a free data source) to BigQuery in just a few minutes. You can achieve this in 2 simple steps:
Step 1: Configure Google Sheets as a source by entering the Pipeline Name and the spreadsheet you wish to replicate.
Step 2: Connect to your BigQuery account and start moving your data from Google Sheets to BigQuery by providing the project ID, dataset ID, Data Warehouse name, and GCS bucket.
For more details, check out:
- Google Sheets Source Connector
- BigQuery Destinations Connector
Key features of LIKE.TG:
- Data Transformation: it provides a simple interface to perfect, modify, and enrich the data you want to transfer.
- Schema Management: LIKE.TG can automatically detect the schema of the incoming data and map it to the destination schema.
- Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real time. This ensures efficient utilization of bandwidth on both ends.

Method 2: Using the BigQuery Connector to Move Data from Google Sheets to BigQuery
You can easily upload data using BigQuery's data connector. The steps below illustrate how:
Step 1: Log in to your GCP console and navigate to the BigQuery UI using the hamburger menu.
Step 2: Inside BigQuery, select 'Create Dataset'.
Step 3: After creating the dataset, create the BigQuery table that will contain the incoming data from Sheets. To create a BigQuery table from a Google Sheet, click on 'Create a table', and in the 'Create a table' tab, select Drive.
Step 4: In the source window, choose Google Drive as your source and populate the Select Drive URL field with the URL of your Google Sheet. You can select either CSV or Sheets as the format; both formats allow you to auto-detect the schema. You could also specify the column names and data types.
Step 5: Fill in the table name and select 'Create a table'. With your Google Sheet linked to Google BigQuery, you can keep committing changes to your sheet and they will automatically appear in Google BigQuery.
Step 6: Now that the data is in BigQuery, you can run SQL queries against it, for example the short query sketched below.
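As a hedged illustration (the dataset, table, and column names are placeholders that depend on what your sheet actually contains), a quick aggregation over the Sheets-backed table might look like this. Note that queries against Drive-backed tables require credentials with the Google Drive scope in addition to BigQuery access.

-- Simple check on the Sheets-backed table created above
SELECT region, SUM(CAST(revenue AS NUMERIC)) AS total_revenue
FROM mydataset.sheet_sales
GROUP BY region
ORDER BY total_revenue DESC;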
Method 3: Using the Sheets Connector to Move Data from Google Sheets to BigQuery
This method of connecting Google Sheets to BigQuery is only available for Business, Enterprise, or Education G Suite accounts. It allows you to save your SQL queries directly into your Google Sheets. The steps for using the Sheets data connector are highlighted below with the help of a public dataset:
Step 1: For starters, open or create a Google Sheets spreadsheet.
Step 2: Next, click on Data > Data Connectors > Connect to BigQuery.
Step 3: Click Get Connected, and select a Google Cloud project with billing enabled.
Step 4: Next, click on Public Datasets. Type Chicago in the search box, then select the Chicago_taxi_trips dataset. From this dataset choose the taxi_trips table and click the Connect button to finish this step. Your spreadsheet will now show a preview of the connected table.
You can now use this spreadsheet to create formulas, charts, and pivot tables using the usual Google Sheets techniques.

Managing Access and Controlling Share Settings
Your data should remain protected across both Sheets and BigQuery, so you need to manage who has access to each. To do this, all you need to do is create a Google Group to serve as an access control group. By clicking the share icon in Sheets, you can choose which of your team members can edit, view, or comment. If the same group is also granted the corresponding permissions in BigQuery, this serves as a simple form of IAM for your dataset.

Limitations of Using the Sheets Connector to Connect Google Sheets to BigQuery
So far we have covered two native ways to connect BigQuery and Google Sheets. Despite the clear benefits of the process, it has some limitations:
- It cannot support volumes of data greater than 10,000 rows in a single spreadsheet.
- To make use of the Sheets data connector for BigQuery, you need to operate a Business, Enterprise, or Education G Suite account, which is an expensive option.
Before wrapping up, let's cover some basics.

Introduction to Google Sheets
Spreadsheets are electronic worksheets made up of rows and columns in which users can enter, manage, and perform calculations on their data. They give users the ability to create tables, charts, and graphs to perform analysis. Google Sheets is a spreadsheet program offered by Google as part of its Google Docs Editors suite, which also includes Google Drawings, Google Slides, Google Forms, Google Docs, Google Keep, and Google Sites. Google Sheets gives you the option to choose from a vast variety of schedules, budgets, and other pre-made spreadsheets designed to make your work better and your life easier.
Here are a few key features of Google Sheets:
- All your changes are saved automatically as you type, and you can use revision history to see old versions of the same spreadsheet, sorted by the people who made the change and the date.
- The Explore panel gives you instant insights and an overview of your data through a selection of pre-populated charts and informative summaries.
- Google Sheets allows everyone to work together in the same spreadsheet at the same time.
- You can create, access, and edit your spreadsheets wherever you go, from your tablet, phone, or computer.
Introduction to BigQuery Google BigQuery is a data warehouse technology designed by Google to make data analysis more productive by providing fast SQL-querying for big data. The points below reiterate how BigQuery can help improve our overall data architecture: When it comes to Google BigQuery size is never a problem. You can analyze up to 1TB of data and store up to 10GB for free each month. BigQuery gives you the liberty to focus on analytics while fully abstracting all forms of infrastructure, so you can focus on what matters. Incorporating BigQuery into your architecture will open you to the services on GCP(Google Cloud Platform). GCP provides a suite of cloud services such as data storage, data analysis, and machine learning. With BigQuery in your architecture, you can apply Machine learning to your data by using BigQuery ML. If you and your team are collaborating on google sheets you can make use of Google Data Studio to build interactive dashboards and graphical rendering to better represent the data. These dashboards are updated as data is updated on the spreadsheet. BigQuery offers a strong security regime for all its users. It offers a 99.9% service level agreement and strictly adheres to privacy shield principles. GCP provides its users with Identity and Access Management (IAM), where you as the main user can decide the specific data each member of your team can access. BigQuery offers an elastic warehouse model that scales automatically according to your data size and query complexity. Additional Resources on Google Sheets to Bigquery Move Data from Excel to Bigquery Conclusion This blog talks about the 3 different methods you can use to move data from Google Sheets to BigQuery in a seamless fashion. In addition to Google Sheets, LIKE.TG can move data from a variety of Free & Paid Data Sources (Databases, Cloud Applications, SDKs, and more). LIKE.TG ensures that your data is consistently and securely moved from any source to BigQuery in real-time.
 Google Sheets to Snowflake: 2 Easy Methods
Google Sheets to Snowflake: 2 Easy Methods
Is your data in Google Sheets becoming too large for on-demand analytics? Are you struggling to combine data from multiple Google Sheets into a single source of truth for reports and analytics? If that’s the case, then your business may be ready for a move to a mature data platform like Snowflake. This post covers two approaches for migrating your data from Google Sheets to Snowflake. Snowflake Google Sheets integration facilitates data accessibility and collaboration by allowing information to be transferred and analyzed across the two platforms with ease. The following are the methods you can use to connect Google Sheets to Snowflake in a seamless fashion: Method 1: Using LIKE.TG Data to Connect Google Sheets to Snowflake LIKE.TG is the only real-time ELT No-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources & load data to the destinations but also transform & enrich your data, & make it analysis-ready. Sign up here for a 14-Day Free Trial! LIKE.TG provides an easy-to-use data integration platform that works by building an automated pipeline in just two interactive steps: Step 1: Configure Google Sheets as a source, by entering the Pipeline Name and the spreadsheet you wish to replicate. Perform the following steps to configure Google Sheets as a Source in your Pipeline: Click PIPELINES in the Navigation Bar. Click + CREATE in the Pipelines List View. In the Select Source Type page, select Google Sheets. In the Configure your Google Sheets account page, to select the authentication method for connecting to Google Sheets, do one of the following: To connect with a User Account, do one of the following: Select a previously configured account and click CONTINUE. Click + ADD GOOGLE SHEETS ACCOUNT and perform the following steps to configure an account: Select the Google account associated with your Google Sheets data. Click Allow to authorize LIKE.TG to access the data. To connect with a Service Account, do one of the following: Select a previously configured account and click CONTINUE. Click the attach icon () to upload the Service Account Key and click CONFIGURE GOOGLE SHEETS ACCOUNT.Note: LIKE.TG supports only JSON format for the key file. In the Configure your Google Sheets Source page, specify the Pipeline Name, Sheets, Custom Header Row. Click TEST & CONTINUE. Proceed to configuring the data ingestion and setting up the Destination. Step 2: Create and Configure your Snowflake Warehouse LIKE.TG provides you with a ready-to-use script to configure the Snowflake warehouse you intend to use as the Destination. Follow these steps to run the script: Log in to your Snowflake account. In the top right corner of the Worksheets tab, click the + icon to create a new worksheet. Paste the script in the worksheet. The script creates a new role for LIKE.TG in your Snowflake Destination. Keeping your privacy in mind, the script grants only the bare minimum permissions required by LIKE.TG to load the data in your Destination. Replace the sample values provided in lines 2-7 of the script with your own to create your warehouse. These are the credentials that you will be using to connect your warehouse to LIKE.TG . You can specify a new warehouse, role, and or database name to create these now or use pre-existing ones to load data into. Press CMD + A (Mac) or CTRL + A (Windows) inside the worksheet area to select the script. 
Press CMD+return (Mac) or CTRL + Enter (Windows) to run the script. Once the script runs successfully, you can use the credentials from lines 2-7 of the script to connect your Snowflake warehouse to LIKE.TG . Step 3: Complete Google Sheets to Snowflake migration by providing your destination name, account name, region of your account, database username and password, database and schema name, and the Data Warehouse name. And LIKE.TG automatically takes care of the rest. It’s just that simple. You are now ready to start migrating data from Google Sheets to Snowflake in a hassle-free manner! You can also integrate data from numerous other free data sources like Google Sheets, Zendesk, etc. to the desired destination of your choice such as Snowflake in a jiff. LIKE.TG is also much faster, thanks to its highly optimized features and architecture. Some of the additional features you can also enjoy with LIKE.TG are: Transformations – LIKE.TG provides preload transformations through Python code. It also allows you to run transformation code for each event in the pipelines you set up. You need to edit the event object’s properties received in the transform method as a parameter to carry out the transformation. LIKE.TG also offers drag-and-drop transformations like Date and Control Functions, JSON, and Event Manipulation to name a few. These can be configured and tested before putting them to use. Monitoring and Data Management – LIKE.TG automatically manages your data loads and ensures you always have up-to-date and accurate data in Snowflake. Automatic Change Data Capture – LIKE.TG performs incremental data loads automatically through a number of in-built Change Data Capture mechanisms. This means, as and when data on Google Sheets changes, they are loaded onto Snowflake in real time. It just took us 2 weeks to completely transform from spreadsheets to a modern data stack. Thanks to LIKE.TG that helped us make this transition so smooth and quick. Now all the stakeholders of our management, sales, and marketing team can easily build and access their reports in just a few clicks. – Matthew Larner, Managing Director, ClickSend Method 2: Using Migration Scripts to Connect Google Sheets to Snowflake To migrate your data from Google Sheets to Snowflake, you may opt for a custom-built data migration script to get the job done. We will demonstrate this process in the next paragraphs. To proceed, you will need the following requirements. Step 1: Setting Up Google Sheets API Access for Google Sheets As a first step, you would need to set up Google Sheets API access for the affected Google Sheets. Start by doing the following: 1. Log in to the Google account that owns the Google Sheets 2. Point your browser to the Google Developer Console (copy and paste the following in your browser: console.developers.google.com) 3. After the console loads create a project by clicking the “Projects” dropdown and then clicking “New Project“ 4. Give your project a name and click “Create“ 5. After that, click “Enable APIs and Services“ 6. Search for “Google Sheets API” in the search bar that appears and select it 7. Click “Enable” to enable the Google Sheets API 8. Click on the “Credentials” option on the left navbar in the view that appears, then click “Create Credentials“, and finally select “Service Account“ 9. Provide a name for your service account. You will notice it generates an email format for the Service Account ID. In my example in the screenshot below, it is “[email protected]”. Take note of this value. 
The token “migrate-268012” is the name of the project I created while “gsheets-migration” is the name of my service account. In your case, these would be your own supplied values. 10. Click “Create” and fill out the remaining optional fields. Then click “Continue“ 11. In the view that appears, click “Create Key“, select the “JSON” option and click “Create” to download your key file (credentials). Please store it in a safe place. We will use this later when setting up our migration environment. 12. Finally, click “Done“. At this point, all that remains for the Google Sheets setup is the sharing of all the Google Sheets you wish to migrate with the email-format Service Account ID mentioned in step 9 above. Note: You can copy your Service Account ID from the “client-email” field of the credential file you downloaded. For this demonstration, I will be migrating a sheet called “data-uplink-logs” shown in the screenshot below. I will now share it with my Service Account ID:Click “Share” on the Google sheet, paste in your Service Account ID, and click “Send“. Repeat this process for all sheets you want to migrate. Ignore any “mail delivery subsystem failure” notifications you receive while sharing the sheets, as your Service Account ID is not designed to operate as a normal email address. Step 2: Configuring Target Database in Snowflake We’re now ready to get started on the Snowflake side of the configuration process, which is simpler. To begin, create a Snowflake account. Creating an account furnishes you with all the credentials you will need to access Snowflake from your migration script. Specifically: After creating your account, you will be redirected to your Cloud Console which will open up in your browser During the account creation process, you would have specified your chosen username and password. You would have also selected your preferred AWS region, which will be part of your account. Your Snowflake account is of the form <Your Account ID>.<AWS Region> and your Snowflake cloud console URL will be of the form https://<Your Account ID>.<AWS Region>.snowflakecomputing.com/ Prepare and store a JSON file with these credentials. It will have the following layout: { "user": "<Your Username>", "account": "<Your Account ID>.<AWS Region>", "password": "<Your Password>" } After storing the JSON file, take some time to create your target environment on Snowflake using the intuitive User Interface. You are initially assigned a Data Warehouse called COMPUTE_WH so you can go ahead and create a Database and tables in it. After providing a valid name for your database and clicking “Finish“, click the “Grant Privileges” button which will show the form in the screenshot below. Select the “Modify” privilege and assign it to your schema name (which is “PUBLIC” by default). Click “Grant“. Click “Cancel” if necessary, after that, to return the main view. The next step is to add a table to your newly created database. You do this by clicking the database name on the left display and then clicking on the “Create Table” button. This will pop up the form below for you to design your table: After designing your table, click “Finish” and then click on your table name to verify that your table was created as desired: Finally, open up a Worksheet pane, which will allow you to run queries on your table. Do this by clicking on the “Worksheets” icon, and then clicking on the “+” tab. You can now select your database from the left pane to start running queries. 
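If you prefer SQL to the console, the same target objects can also be created directly from a worksheet. This is only a sketch: the database, schema, table, and column names below mirror the ones used later in this walkthrough, while the column types are assumptions you should adjust to your own sheet:

-- Run with a role that is allowed to create databases (e.g. SYSADMIN)
CREATE DATABASE IF NOT EXISTS TEST_GSHEETS_MIGRATION;

-- Target table matching the Google Sheet columns migrated below
CREATE TABLE IF NOT EXISTS TEST_GSHEETS_MIGRATION.PUBLIC.GSHEETS_MIGRATION (
    CLIENT_ID      STRING,
    NETWORK_TYPE   STRING,
    BYTES          NUMBER,
    UNIX_TIMESTAMP NUMBER
);

-- Equivalent of the "Grant Privileges" step performed in the UI
GRANT MODIFY ON SCHEMA TEST_GSHEETS_MIGRATION.PUBLIC TO ROLE SYSADMIN;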
We will run queries from this view to verify that our data migration process is correctly writing our data from the Google sheet to this table. We are now ready to move on to the next step. Step 3: Preparing a Migration Environment on Linux Server In this step, we will configure a migration environment on our Linux server. SSH into your Linux instance. I am using a remote AWS EC2 instance running Ubuntu, so my SSH command is of the form ssh -i <keyfile>.pem ubuntu@<server_public_IP> Once in your instance, run sudo apt-get update to update the environment Next, create a folder for the migration project and enter it sudo mkdir migration-test; cd migration-test It’s now time to clone the migration script we created for this post: sudo git clone https://github.com/cmdimkpa/Google-Sheets-to-Snowflake-Data-Migration.git Enter the project directory and view contents with the command: cd Google-Sheets-to-Snowflake-Data-Migration; ls This reveals the following files: googlesheets.json: copy your saved Google Sheets API credentials into this file. snowflake.json: likewise, copy your saved Snowflake credentials into this file. migrate.py: this is the migration script. Using the Migration Script Before using the migration script (a Python script), we must ensure the required libraries for both Google Sheets and Snowflake are available in the migration environment. Python itself should already be installed – this is usually the case for Linux servers, but check and ensure it is installed before proceeding. To install the required packages, run the following commands: sudo apt-get install -y libssl-dev libffi-dev pip install --upgrade snowflake-connector-python pip install gspread oauth2client PyOpenSSL At this point, we are ready to run the migration script. The required command is of the form: sudo python migrate.py <Source Google Sheet Name> <Comma-separated list of columns in the Google Sheet to Copy> <Number of rows to copy each run> <Snowflake target Data Warehouse> <Snowflake target Database> <Snowflake target Table> <Snowflake target table Schema> <Comma-separated list of Snowflake target table fields> <Snowflake account role> For our example process, the command becomes: sudo python migrate.py data-uplink-logs A,B,C,D 24 COMPUTE_WH TEST_GSHEETS_MIGRATION GSHEETS_MIGRATION PUBLIC CLIENT_ID,NETWORK_TYPE,BYTES,UNIX_TIMESTAMP SYSADMIN To migrate 24 rows of incremental data (each run) from our test Google Sheet data-uplink-logs to our target Snowflake environment, we simply run the command above. The following is a screenshot of what follows: The reason we migrate only 24 rows at a time is to beat the rate limit for the free tier of the Google Sheets API. Depending on your plan, you may not have this restriction. Step 4: Testing the Migration Process To test that the migration ran successfully, we simply go to our Snowflake Worksheet which we opened earlier, and run the following SQL query: SELECT * FROM TEST_GSHEETS_MIGRATION.PUBLIC.GSHEETS_MIGRATION Indeed, the data is there. So the data migration effort was successful. Step 5: Run CRON Jobs As a final step, run cron jobs as required to have the migrations occur on a schedule. We cannot cover the creation of cron jobs here, as it is beyond the scope of this post. This concludes the first approach! I hope you were as excited reading that as I was, writing it. It’s been an interesting journey, now let’s review the drawbacks of this approach. 
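Before that, since Step 5 only mentions cron in passing, here is a minimal example of the kind of crontab entry that could schedule the migration command shown above. The schedule, working directory, and log file are assumptions, so adapt them to your own server:

# Run the incremental Google Sheets to Snowflake migration every 15 minutes (edit with: crontab -e)
*/15 * * * * cd /home/ubuntu/migration-test/Google-Sheets-to-Snowflake-Data-Migration && python migrate.py data-uplink-logs A,B,C,D 24 COMPUTE_WH TEST_GSHEETS_MIGRATION GSHEETS_MIGRATION PUBLIC CLIENT_ID,NETWORK_TYPE,BYTES,UNIX_TIMESTAMP SYSADMIN >> /tmp/gsheets_migration.log 2>&1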
Limitations of using Migration Scripts to Connect Google Sheets to Snowflake The migration script approach to connecting Google Sheets to Snowflake works well, but it has the following drawbacks: This approach requires dedicating a few engineers to set up and test the infrastructure. Once built, you would also need a dedicated engineering team that can constantly monitor the infrastructure and provide immediate support if and when something breaks. Aside from the setup process, which can be intricate depending on experience, this approach creates new requirements such as: The need to monitor the logs and ensure the uptime of the migration processes. Fine-tuning of the cron jobs to ensure optimal data transmission with respect to the data inflow rates of the different Google Sheets, any Google Sheets API rate limits, and the latency requirements of the reporting or analytics processes running on Snowflake or elsewhere. Download the Cheatsheet on How to Set Up ETL to Snowflake Learn the best practices and considerations for setting up high-performance ETL to Snowflake Method 3: Connect Google Sheets to Snowflake Using Python In this method, you will use Python to load data from Google Sheets to Snowflake. To do this, you will have to enable public access to your Google Sheets. You can do this by going to File >> Share >> Publish to web. After publishing to the web, you will see a link in the format of https://docs.google.com/spreadsheets/d/{your_google_sheets_id}/edit#gid=0 You need to install three libraries in order to read this data, transform it into a dataframe, and write it to Snowflake: pandas, the Snowflake connector, and pyarrow. Installing pandas may be done with pip install pandas. The command pip install snowflake-connector-python may be used to install the Snowflake connector, and pip install pyarrow may be used to install pyarrow. You may use the following code to read the data from your Google Sheets. import pandas as pd data=pd.read_csv(f'https://docs.google.com/spreadsheets/d/{your_google_sheets_id}/pub?output=csv') In the code above, you will replace {your_google_sheets_id} with the ID of your spreadsheet. You can preview the data by running the command data.head() You can also check the number of columns and records by running data.shape Setting up Snowflake login credentials You will need to set up a data warehouse, database, schema, and table on your Snowflake account. Data loading in Snowflake You would need to utilize the Snowflake connector that was previously installed in Python in order to import the data into Snowflake. When you run write_to_snowflake(data), you will ingest all the data into your Snowflake data warehouse (a sketch of such a helper is shown at the end of this section). Disadvantages Of Using ETL Scripts There are a variety of challenges and drawbacks when integrating data from sources like Google Sheets to Snowflake using ETL (Extract, Transform, Load) procedures, especially for businesses with little funding or experience. Price is the primary factor to be considered. Implementation and upkeep of the ETL technique can be expensive. It demands investments in personnel with the necessary skills to efficiently design, develop, and oversee these processes, in addition to technology. Complexity is an additional problem. ETL processes may be intricate and challenging to configure properly. Companies without the necessary expertise may find it difficult to properly manage data conversions and interfaces. ETL processes can also have limitations on scalability and flexibility. 
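Here is the sketch of a write_to_snowflake helper promised above. The function body, connection parameters, and target table name are assumptions for illustration; it builds on the snowflake-connector-python and pyarrow packages installed earlier and expects the target table to already exist:

import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

def write_to_snowflake(df):
    # Connection parameters are placeholders - use your own account details
    conn = snowflake.connector.connect(
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        account="YOUR_ACCOUNT_ID.YOUR_REGION",
        warehouse="COMPUTE_WH",
        database="YOUR_DATABASE",
        schema="PUBLIC",
    )
    try:
        # write_pandas bulk-loads the dataframe into an existing table
        success, _, nrows, _ = write_pandas(conn, df, table_name="GSHEETS_DATA")
        print(f"Loaded {nrows} rows, success={success}")
    finally:
        conn.close()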
Returning to the drawbacks: manual ETL scripts might not handle unstructured data well or provide real-time data streams, which makes them a poor fit for low-latency analytics. Conclusion This blog talks about the different methods you can use to set up a Google Sheets Snowflake integration in a seamless fashion: migration scripts (including the Python-based approach) and a third-party tool, LIKE.TG . Visit our Website to Explore LIKE.TG Extracting complex data from a diverse set of data sources can be a challenging task, and this is where LIKE.TG saves the day! LIKE.TG offers a faster way to move data from Databases or SaaS applications such as MongoDB into a Data Warehouse like Snowflake to be visualized in a BI tool. LIKE.TG is fully automated and hence does not require you to code. As we have seen, LIKE.TG greatly simplifies the process of migrating data from your Google Sheets to Snowflake or indeed any other source and destination. Sign Up for your 14-day free trial and experience stress-free data migration today! You can also have a look at the unbeatable LIKE.TG Pricing that will help you choose the right plan for your business needs.
 How to Connect Data from MongoDb to BigQuery in 2 Easy Methods
How to Connect Data from MongoDb to BigQuery in 2 Easy Methods
MongoDB is a popular NoSQL database that stores data in a JSON-like format. If your application's data model is a natural fit for MongoDB's recommended data model, it can provide good performance, flexibility, and scalability for transactional workloads. However, due to a few restrictions that you can face while analyzing data, it is highly recommended to stream data from MongoDB to BigQuery or another data warehouse. MongoDB doesn't have proper joins, getting data from other systems into MongoDB can be difficult, and it has no native support for SQL. Drafting complex analytics logic in MongoDB's aggregation framework is also not as easy as it is in SQL. This article provides steps to migrate data from MongoDB to BigQuery. It also talks about LIKE.TG Data, which makes it easier to replicate data. Therefore, without any further ado, let's start learning about this MongoDB to BigQuery ETL. What is MongoDB? MongoDB is a popular NoSQL database management system known for its flexibility, scalability, and ease of use. It stores data in flexible, JSON-like documents, making it suitable for handling a variety of data types and structures. MongoDB is commonly used in modern web applications, data analytics, real-time processing, and other scenarios where flexibility and scalability are essential. What is BigQuery? BigQuery is a fully managed, serverless data warehouse and analytics platform provided by Google Cloud. It is designed to handle large-scale data analytics workloads and allows users to run SQL queries against multi-terabyte datasets in a matter of seconds. BigQuery supports real-time data streaming for analysis, integrates with other Google Cloud services, and offers advanced features like machine learning integration, data visualization, and data sharing capabilities. Prerequisites mongoexport (for exporting data from MongoDB) a BigQuery dataset a Google Cloud Platform account a LIKE.TG free-trial account Methods to move Data from MongoDB to BigQuery Method 1: Using LIKE.TG Data to Set up MongoDB to BigQuery Method 2: Manual Steps to Stream Data from MongoDB to BigQuery Method 1: Using LIKE.TG Data to Set up MongoDB to BigQuery Step 1: Select the Source Type To select MongoDB as the Source: Click PIPELINES in the Asset Palette. Click + CREATE in the Pipelines List View. In the Select Source Type page, select the MongoDB variant. Step 2: Select the MongoDB Variant Select the MongoDB service provider that you use to manage your MongoDB databases: Generic Mongo Database: Database management is done at your end, or by a service provider other than MongoDB Atlas. MongoDB Atlas: The managed database service from MongoDB. Step 3: Specify MongoDB Connection Settings Refer to the following sections based on your MongoDB deployment: Generic MongoDB. MongoDB Atlas. In the Configure your MongoDB Source page, specify the required connection settings. Step 4: Configure BigQuery Connection Settings Now select Google BigQuery as your destination and start moving your data. You can modify only some of the settings you provide here once the Destination is created. Refer to LIKE.TG 's documentation on modifying the BigQuery Destination configuration for more information. Click DESTINATIONS in the Asset Palette. Click + CREATE in the Destinations List View. 
In the Add Destination page, select Google BigQuery as the Destination type. In the Configure your Google BigQuery Account page, select the authentication method for connecting to BigQuery. In the Configure your Google BigQuery Warehouse page, specify the following details. By following the above mentioned steps, you will have successfully completed MongoDB BigQuery replication. With continuous Real-Time data movement, LIKE.TG allows you to combine MongoDB data with your other data sources and seamlessly load it to BigQuery with a no-code, easy-to-setup interface. Try our 14-day full-feature access free trial! Method 2: Manual Steps to Stream Data from MongoDB to BigQuery For the manual method, you will need some prerequisites, like: MongoDB environment: You should have a MongoDB account with a dataset and collection created in it. Tools like MongoDB compass and tool kit should be installed on your system. You should have access to MongoDB, including the connection string required to establish a connection using the command line. Google Cloud Environment Google Cloud SDK A Google Cloud project created with billing enabled Google Cloud Storage Bucket BigQuery API Enabled After meeting these requirements, you can manually export your data from MongoDB to BigQuery. Let’s get started! Step 1: Extract Data from MongoDB For the first step, you must extract data from your MongoDB account using the command line. To do this, you can use the mongoexport utility. Remember that mongoexport should be directly run on your system’s command-line window. An example of a command that you can give is: mongoexport --uri="mongodb+srv://username:[email protected]/database_name" --collection=collection_name --out=filename.file_format --fields="field1,field2…" Note: ‘username: password’ is your MongoDB username and password. ‘Cluster_name’ is the name of the cluster you created on your MongoDB account. It contains the database name (database_name) that contains the data you want to extract. The ‘–collection’ is the name of the table that you want to export. ‘–out=Filename.file_format’ is the file’s name and format in which you want to extract the data. For example, Comments.csv, the file with the extracted data, will be stored as a CSV file named comments. ‘– fields’ is applicable if you want to extract data in a CSV file format. After running this command, you will get a message like this displayed on your command prompt window: Connected to:mongodb+srv://[**REDACTED**]@cluster-name.gzjfolm.mongodb.net/database_name exported n records Here, n is just an example. When you run this command, it will display the number of records exported from your MongoDB collection. Step 2: Optional cleaning and transformations This is an optional step, depending on the type of data you have exported from MongoDB. When preparing data to be transferred from MongoDB to BigQuery, there are a few fundamental considerations to make in addition to any modifications necessary to satisfy your business logic. BigQuery processes UTF-8 CSV data. If your data is encoded in ISO-8859-1 (Latin-1), then you should specify that while loading it to BigQuery. BigQuery doesn’t enforce Primary key or Unique key Constraints, and the ETL (Extract, Transform, and Load) process should take care of that. Date values should be in the YYYY-MM-DD (Year-month-date) format and separated by dashes. Also, both platforms have different column types, which should be transformed for consistent and error-free data transfer. 
A few MongoDB data types and their BigQuery equivalents also need to be mapped; for example, ObjectId and String values map to STRING, Boolean to BOOL, Date to TIMESTAMP, and NumberInt/NumberLong to INT64. These are just a few transformations you need to consider. Make the necessary translations before you load data to BigQuery. Step 3: Uploading data to Google Cloud Storage (GCS) After transforming your data, you must upload it to Google Cloud Storage. The easiest way to do this is through your Google Cloud web console. Log in to your Google Cloud account and search for Buckets. Fill in the required fields and click Create. After creating the bucket, you will see your bucket listed with the rest. Select your bucket and click on the 'upload files' option. Select the file you exported from MongoDB in Step 1. Your MongoDB data is now uploaded to Google Cloud Storage. Step 4: Upload Data Extracted from MongoDB to BigQuery Table from GCS Now, from the left panel of Google Cloud, select BigQuery and select the project you are working on. Click on the three dots next to it and click 'Create Dataset.' Fill in all the necessary information and click the 'Create Dataset' button at the bottom. You have now created a dataset to store your exported data in. Now click on the three dots next to the dataset name you just created. Let's say I created the dataset called mongo_to_bq. Select the 'Create table' option. Now, select the 'Google Cloud Storage' option and click 'Browse' to select the file you uploaded to your bucket in Step 3. Fill in the rest of the details and click 'Create Table' at the bottom of the page. Now, your data has been transferred from MongoDB to BigQuery. Step 5: Verify Data Integrity After loading the data to BigQuery, it is essential to verify that the same data from MongoDB has been transferred and that no missing or corrupted data is loaded to BigQuery. To verify the data integrity, run some SQL queries in the BigQuery UI and compare the records fetched with your original MongoDB data to ensure correctness and completeness. Example: to find the locations of all the theaters in a dataset called "Theaters," we could run a simple SELECT query and compare its results with the source collection (an illustrative query is shown at the end of this article section). Learn more about: MongoDB data replication Limitations of Manually Moving Data from MongoDB to BigQuery The following are some possible drawbacks when data is streamed from MongoDB to BigQuery manually: Time-Consuming: Compared to automated methods, manually exporting MongoDB data, transferring it to Cloud Storage, and then importing it into BigQuery is inefficient. Every time fresh data enters MongoDB, this laborious procedure must be repeated. Potential for human error: There is a chance that data will be wrongly exported, uploaded to the wrong place, badly converted, or loaded to the wrong table or partition if error-prone manual procedures are followed at every stage. Data lags behind MongoDB: The data in BigQuery might not be current with the most recent inserts and changes in the MongoDB database due to the manual process's latency. Recent modifications may be overlooked in important analyses. Difficult to incrementally add new data: Compared to automatic streaming, which manages this effectively, adding just new or modified MongoDB entries manually is difficult. Hard to reprocess historical data: It would be necessary to manually export historical data from MongoDB and reload it into BigQuery if any problems were discovered in the datasets that were previously imported. 
No error handling: Without automated procedures to detect, manage, and retry mistakes and incorrect data, problems like network outages, data inaccuracies, or restrictions violations may arise. Scaling limitations: MongoDB’s exporting, uploading, and loading processes don’t scale properly and become increasingly difficult as data sizes increase. The constraints drive the requirement for automated MongoDB to BigQuery replication to create more dependable, scalable, and resilient data pipelines. MongoDB to BigQuery: Use Cases Streaming data from MongoDB to BigQuery may be very helpful in the following frequent use cases: Business analytics: Analysts may use BigQuery’s quick SQL queries, sophisticated analytics features, and smooth interaction with data visualization tools like Data Studio by streaming MongoDB data into BigQuery. This can lead to greater business insights. Data warehousing: By streaming data from MongoDB and merging it with data from other sources, businesses may create a cloud data warehouse on top of BigQuery, enabling corporate reporting and dashboards. Log analysis: BigQuery’s columnar storage and massively parallel processing capabilities enable the streaming of server, application, and clickstream logs from MongoDB databases for large-scale analytics. Data integration: By streaming to BigQuery as a centralised analytics data centre, businesses using MongoDB for transactional applications may integrate and analyse data from their relational databases, customer relationship management (CRM) systems, and third-party sources. Machine Learning: Streaming data from production MongoDB databases may be utilized to train ML models using BigQuery ML’s comprehensive machine learning features. Cloud migration: By gradually streaming data, move analytics from on-premises MongoDB to Google Cloud’s analytics and storage services. Additional Read – Stream data from mongoDB Atlas to BigQuery Move Data from MongoDB to MySQL Connect MongoDB to Snowflake Move Data from MongoDB to Redshift MongoDB Atlas to BigQuery Conclusion This blog makes migrating from MongoDB to BigQuery an easy everyday task for you! The methods discussed in this blog can be applied so that business data in MongoDB and BigQuery can be integrated without any hassle through a smooth transition, with no data loss or inconsistencies. Sign up for a 14-day free trial with LIKE.TG Data to streamline your migration process and leverage multiple connectors, such as MongoDB and BigQuery, for real-time analysis! FAQ on MongoDB To BigQuery What is the difference between BigQuery and MongoDB? BigQuery is a fully managed data warehouse for large-scale data analytics using SQL. MongoDB is a NoSQL database optimized for storing unstructured data with high flexibility and scalability. How do I transfer data to BigQuery? Use tools like Google Cloud Dataflow, BigQuery Data Transfer Service, or third-party ETL tools like LIKE.TG Data for a hassle-free process. Is BigQuery SQL or NoSQL? BigQuery is an SQL database designed to run fast, complex analytical queries on large datasets. What is the difference between MongoDB and Oracle DB? MongoDB is a NoSQL database optimized for unstructured data and flexibility. Oracle DB is a relational database (RDBMS) designed for structured data, complex transactions, and strong consistency.
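As promised in Step 5 of the manual method above, here is an illustrative data-integrity check that could be run in the BigQuery console and compared against the source MongoDB collection. The project, dataset, table, and column names are placeholders (following the mongo_to_bq example), so adjust them to your own setup:

-- Row count should match the number of documents exported from MongoDB
SELECT COUNT(*) AS total_rows
FROM `my-project.mongo_to_bq.theaters`;

-- Spot-check a few records, e.g. theater locations
SELECT theaterId, city, state
FROM `my-project.mongo_to_bq.theaters`
LIMIT 10;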
 How to Connect DynamoDB to S3? : 5 Easy Steps
How to Connect DynamoDB to S3? : 5 Easy Steps
Moving data from Amazon DynamoDB to S3 is one of the most efficient ways to derive deeper insights from your data. If you are trying to move your data into a larger data store, you have landed on the right article. It has now become easier to replicate data from DynamoDB to S3. This article will give you a brief overview of Amazon DynamoDB and Amazon S3. You will also get to know how you can set up your DynamoDB to S3 integration using 5 easy steps. Moreover, the limitations of the method will also be discussed. Read along to know more about connecting DynamoDB to S3 in the further sections. Prerequisites You will have a much easier time understanding the ways of setting up the DynamoDB to S3 integration if you have gone through the following aspects: An active AWS account. Working knowledge of the ETL process. What is Amazon DynamoDB? Amazon DynamoDB is a document and key-value database with millisecond response times. It is a fully managed, multi-active, multi-region, persistent database for internet-scale applications with built-in security, in-memory cache, backup, and restore. It can handle up to 10 trillion requests per day and 20 million requests per second. Some of the top companies like Airbnb, Toyota, Samsung, Lyft, and Capital One rely on DynamoDB's performance and scalability. Simplify Data Integration With LIKE.TG 's No Code Data Pipeline LIKE.TG Data, an Automated No-code Data Pipeline, helps you directly transfer data from Amazon DynamoDB, S3, and 150+ other sources (50+ free sources) to Business Intelligence tools, Data Warehouses, or a destination of your choice in a completely hassle-free & automated manner. LIKE.TG 's fully managed pipeline uses DynamoDB's data streams to support Change Data Capture (CDC) for its tables and ingests new information via Amazon DynamoDB Streams & Amazon Kinesis Data Streams. LIKE.TG also enables you to load data from files in an S3 bucket into your Destination database or Data Warehouse seamlessly. Moreover, S3 stores its files after compressing them into a Gzip format. LIKE.TG 's Data pipeline automatically unzips any Gzipped files on ingestion and also performs file re-ingestion in case there is any data update. Get Started with LIKE.TG for Free With LIKE.TG in place, you can automate the Data Integration process, which will help in enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure and flexible manner with zero data loss. LIKE.TG 's consistent & reliable solution to manage data in real-time allows you to focus more on Data Analysis, instead of Data Consolidation. What is Amazon S3? Amazon S3 is a fully managed object storage service used for a variety of purposes like data hosting, backup and archiving, data warehousing, and much more. Through an easy-to-use control panel interface, it provides comprehensive access controls to suit any kind of organizational and commercial compliance requirements. S3 provides high availability by distributing data across multiple servers. This strategy, of course, comes with a propagation delay; as a result, S3 only guarantees eventual consistency. Also, in the case of Amazon S3, the API will always return either the new or the old version of the data and will never return a corrupted response. What is AWS Data Pipeline? AWS Data Pipeline is a Data Integration solution provided by Amazon. 
With AWS Data Pipeline, you just need to define your source and destination, and AWS Data Pipeline takes care of your data movement. This saves you development and maintenance effort. With the help of a Data Pipeline, you can apply pre-condition/post-condition checks, set up an alarm, schedule the pipeline, etc. This article will only focus on data transfer through AWS Data Pipeline alone. Limitations: Per account, you can have a maximum of 100 pipelines, and each pipeline can have a maximum of 100 objects. Steps to Connect DynamoDB to S3 using AWS Data Pipeline You can follow the below-mentioned steps to connect DynamoDB to S3 using AWS Data Pipeline: Step 1: Create an AWS Data Pipeline from the built-in template that Data Pipeline provides for exporting data from DynamoDB to S3. Step 2: Activate the Pipeline once done. Step 3: Once the Pipeline is finished, check whether the file is generated in the S3 bucket. Step 4: Go and download the file to see the content. Step 5: Check the content of the generated file. With this, you have successfully set up the DynamoDB to S3 Integration. Advantages of exporting DynamoDB to S3 using AWS Data Pipeline AWS provides an automatic template for DynamoDB to S3 data export, so very little setup is needed in the pipeline. It internally takes care of your resources, i.e. EC2 instance and EMR cluster provisioning, once the pipeline is activated. It provides greater resource flexibility as you can choose your instance type, EMR cluster engine, etc. This is quite handy in cases where you want to hold your baseline data or take a backup of DynamoDB table data to S3 before further testing on the DynamoDB table, and you can revert to the table once done with testing. Alarms and notifications can be handled beautifully using this approach. Disadvantages of exporting DynamoDB to S3 using AWS Data Pipeline The approach is a bit old-fashioned as it utilizes EC2 instances and triggers an EMR cluster to perform the export activity. If the instance and cluster configuration are not properly provided in the pipeline, it could cost dearly. Sometimes the EC2 instance or EMR cluster fails due to resource unavailability, which can lead to the pipeline failing. Even though the solutions provided by AWS work, they are not very flexible or resource-optimized. These solutions either require additional AWS services or cannot be used to copy data from multiple tables across multiple regions easily. You can use LIKE.TG , an automated Data Pipeline platform for Data Integration and Replication, without writing a single line of code. Using LIKE.TG , you can streamline your ETL process with its pre-built native connectors for various Databases, Data Warehouses, SaaS applications, etc. You can also check out our blog on how to move data from DynamoDB to Amazon S3 using AWS Glue. Solve your data integration problems with LIKE.TG 's reliable, no-code, automated pipelines with 150+ connectors. Get your free trial right away! Conclusion Overall, using AWS Data Pipeline is a costly setup, and going with a serverless approach would be a better option. However, if you want to use engines like Hive, Pig, etc., then Data Pipeline would be a better option to import data from the DynamoDB table to S3. The manual approach of connecting DynamoDB to S3 using AWS Glue will add complex overheads in terms of time and resources. Such a solution will require skilled engineers and regular data updates. 
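Before wrapping up, note that the console steps above can also be scripted with the AWS CLI. This is only a sketch: the pipeline name, the returned pipeline ID, and the definition file are illustrative, and the definition JSON would typically be exported from the DynamoDB-to-S3 template first:

# Create an empty pipeline and note the pipeline ID it returns
aws datapipeline create-pipeline --name dynamodb-to-s3-export --unique-id ddb-s3-export-001

# Attach a pipeline definition (for example, one based on the built-in DynamoDB-to-S3 template)
aws datapipeline put-pipeline-definition --pipeline-id df-EXAMPLE1234567 --pipeline-definition file://ddb-export-definition.json

# Activate the pipeline and monitor its runs
aws datapipeline activate-pipeline --pipeline-id df-EXAMPLE1234567
aws datapipeline list-runs --pipeline-id df-EXAMPLE1234567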
LIKE.TG Data provides an Automated No-code Data Pipeline that empowers you to overcome the above-mentioned limitations. LIKE.TG caters to 150+ data sources (including 50+ free sources) and can seamlessly transfer your S3 and DynamoDB data to the Data Warehouse of your choice in real time. LIKE.TG ’s Data Pipeline enriches your data and manages the transfer process in a fully automated and secure manner without having to write any code. Learn more about LIKE.TG Share your experience of connecting DynamoDB to S3 in the comments section below!
 How to Integrate Salesforce to Snowflake: 3 Easy Methods
How to Integrate Salesforce to Snowflake: 3 Easy Methods
Salesforce is an important CRM system and it acts as one of the basic source systems to integrate while building a Data Warehouse or a system for Analytics. Snowflake is a Software as a Service (SaaS) that provides Data Warehouse on Cloud-ready to use and has enough connectivity options to connect any reporting suite using JDBC or provided libraries.This article uses APIs, UNIX commands or tools, and Snowflake’s web client that will be used to set up this data ingestion from Salesforce to Snowflake. It also focuses on high volume data and performance and these steps can be used to load millions of records from Salesforce to Snowflake. What is Salesforce Image Source Salesforce is a leading Cloud-based CRM platform. As a Platform as a Service (Paas), Salesforce is known for its CRM applications for Sales, Marketing, Service, Community, Analytics etc. It also is highly Scalable and Flexible. As Salesforce contains CRM data including Sales, it is one of the important sources for Data Ingestion into Analytical tools or Databases like Snowflake. What is Snowflake Image Source Snowflake is a fully relational ANSI SQL Data Warehouse provided as a Software-as-a-Service (SaaS). It provides a Cloud Data Warehouse ready to use, with Zero Management or Administration. It uses Cloud-based persistent Storage and Virtual Compute instances for computation purposes. Key features of Snowflake include Time Travel, Fail-Safe, Web-based GUI client for administration and querying, SnowSQL, and an extensive set of connectors or drivers for major programming languages. Methods to move data from Salesforce to Snowflake Method 1: Easily Move Data from Salesforce to Snowflake using LIKE.TG Method 2: Move Data From Salesforce to Snowflake using Bulk API Method 3: Load Data from Salesforce to Snowflake using Snowflake Output Connection (Beta) Method 1: Easily Move Data from Salesforce to Snowflake using LIKE.TG LIKE.TG is fully managed and completely automates the process of not only loading data from your desired source but also enriching the data and transforming it into an analysis-ready form without having to write a single line of code. Its fault-tolerant architecture ensures that the data is handled in a secure, consistent manner with zero data loss. It is that simple. While you relax, LIKE.TG will take care of fetching the data from data sources like Salesforce, etc., and sending it to your destination warehouse for free. Get started for Free with LIKE.TG ! Here are the steps involved in moving the data from Salesforce to Snowflake: Step 1: Configure your Salesforce Source Authenticate and configure your Salesforce data source as shown in the below image. To learn more about this step, visit here. In the Configure Salesforce as Source Page, you can enter details such as your pipeline name, authorized user account, etc. In the Historical Sync Duration, enter the duration for which you want to ingest the existing data from the Source. By default, it ingests the data for 3 months. You can select All Available Data, enabling you to ingest data since January 01, 1970, in your Salesforce account. Step 2: Configure Snowflake Destination Configure the Snowflake destination by providing the details like Destination Name, Account Name, Account Region, Database User, Database Password, Database Schema, and Database Name to move data from Salesforce to Snowflake. In addition to this, LIKE.TG lets you bring data from 150+ Data Sources (40+ free sources) such as Cloud Apps, Databases, SDKs, and more. 
You can explore the complete list here. LIKE.TG will now take care of all the heavy-weight lifting to move data from Salesforce to Snowflake. Here are some of the benefits of LIKE.TG : In-built Transformations – Format your data on the fly with LIKE.TG ’s preload transformations using either the drag-and-drop interface, or our nifty python interface. Generate analysis-ready data in your warehouse using LIKE.TG ’s Postload Transformation Near Real-Time Replication – Get access to near real-time replication for all database sources with log based replication. For SaaS applications, near real time replication is subject to API limits. Auto-Schema Management – Correcting improper schema after the data is loaded into your warehouse is challenging. LIKE.TG automatically maps source schema with destination warehouse so that you don’t face the pain of schema errors. Transparent Pricing – Say goodbye to complex and hidden pricing models. LIKE.TG ’s Transparent Pricing brings complete visibility to your ELT spend. Choose a plan based on your business needs. Stay in control with spend alerts and configurable credit limits for unforeseen spikes in data flow. Security – Discover peace with end-to-end encryption and compliance with all major security certifications including HIPAA, GDPR, SOC-2. Get started for Free with LIKE.TG ! Method 2: Move Data From Salesforce to Snowflake using Bulk API What is Salesforce DATA APIs As, we will be loading data from Salesforce to Snowflake, extracting data out from Salesforce is the initial step. Salesforce provides various general-purpose APIs that can be used to access Salesforce data, general-purpose APIs provided by Salesforce: REST API SOAP API Bulk API Streaming API Along with these Salesforce provides various other specific purpose APIs such as Apex API, Chatter API, Metadata API, etc. which are beyond the scope of this post. The following section gives a high-level overview of general-purpose APIs: Synchronous API: Synchronous request blocks the application/client until the operation is completed and a response is received. Asynchronous API: An Asynchronous API request doesn’t block the application/client making the request. In Salesforce this API type can be used to process/query a large amount of data, as Salesforce processes the batches/jobs at the background in Asynchronous calls. Understanding the difference between Salesforce APIs is important, as depending on the use case we can choose the best of the available options for loading data from Salesforce to Snowflake. APIs will be enabled by default for the Salesforce Enterprise edition, if not we can create a developer account and get the token required to access API. In this post, we will be using Bulk API to access and load the data from Salesforce to Snowflake. The process flow for querying salesforce data using Bulk API: The steps are given below, each one of them explained in detail to get data from Salesforce to Snowflake using Bulk API on a Unix-based machine. Step 1: Log in to Salesforce API Bulk API uses SOAP API for login as Bulk API doesn’t provide login operation. Save the below XML as login.xml, and replace username and password with your respective salesforce account username and password, which will be a concatenation of the account password and access token. 
<?xml version="1.0" encoding="utf-8" ?> <env:Envelope xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"> <env:Body> <n1:login xmlns:n1="urn:partner.soap.sforce.com"> <n1:username>username</n1:username> <n1:password>password</n1:password> </n1:login> </env:Body> </env:Envelope> Using a Terminal, execute the following command: curl <URL> -H "Content-Type: text/xml; charset=UTF-8" -H "SOAPAction: login" -d @login.xml > login_response.xml Above command if executed successfully will return an XML loginResponse with <sessionId> and <serverUrl> which will be used in subsequent API calls to download data. login_response.xml will look as shown below: <?xml version="1.0" encoding="UTF-8"?> <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns="urn:partner.soap.sforce.com" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <soapenv:Body> <loginResponse> <result> <metadataServerUrl><URL> <passwordExpired>false</passwordExpired> <sandbox>false</sandbox> <serverUrl><URL> <sessionId>00Dj00001234ABCD5!AQcAQBgaabcded12XS7C6i3FNE0TMf6EBwOasndsT4O</sessionId> <userId>0010a00000ABCDefgh</userId> <userInfo> <currencySymbol>$</currencySymbol> <organizationId>00XYZABCDEF123</organizationId> <organizationName>ABCDEFGH</organizationName> <sessionSecondsValid>43200</sessionSecondsValid> <userDefaultCurrencyIsoCode xsi:nil="true"/> <userEmail>user@organization</userEmail> <userFullName>USERNAME</userFullName> <userLanguage>en_US</userLanguage> <userName>user@organization</userName> <userTimeZone>America/Los_Angeles</userTimeZone> </userInfo> </result> </loginResponse> </soapenv:Body> </soapenv:Envelope> Using the above XML, we need to initialize three variables: serverUrl, sessionId, and instance. The first two variables are available in the response XML, the instance is the first part of the hostname in serverUrl. The shell script snippet given below can extract these three variables from the login_response.xml file: sessionId=$(xmllint --xpath "/*[name()='soapenv:Envelope']/*[name()='soapenv:Body']/*[name()='loginResponse']/* [name()='result']/*[name()='sessionId']/text()" login_response.xml) serverUrl=$(xmllint --xpath "/*[name()='soapenv:Envelope']/*[name()='soapenv:Body']/*[name()='loginResponse']/* [name()='result']/*[name()='serverUrl']/text()" login_response.xml) instance=$(echo ${serverUrl/.salesforce.com*/} | sed 's|https(colon)//||') sessionId = 00Dj00001234ABCD5!AQcAQBgaabcded12XS7C6i3FNE0TMf6EBwOasndsT4O serverUrl = <URL> instance = organization Step 2: Create a Job Save the given below XML as job_account.xml. The XML given below is used to download Account object data from Salesforce in JSON format. Edit the bold text to download different objects or to change content type as per the requirement i.e. to CSV or XML. We are using JSON here. job_account.xml: <?xml version="1.0" encoding="UTF-8"?> <jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload"> <operation>query</operation> <object>Account</object> <concurrencyMode>Parallel</concurrencyMode> <contentType>JSON</contentType> </jobInfo> Execute the command given below to create the job and get the response, from the XML response received (account_jobresponse.xml), we will extract the jobId variable. 
curl -s -H "X-SFDC-Session: ${sessionId}" -H "Content-Type: application/xml; charset=UTF-8" -d @job_account.xml https://${instance}.salesforce.com/services/async/41.0/job > account_job_response.xml jobId=$(xmllint --xpath "/*[name()='jobInfo']/*[name()='id']/text()" account_job_response.xml) account_job_response.xml: <?xml version="1.0" encoding="UTF-8"?> <jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload"> <id>1200a000001aABCD1</id> <operation>query</operation> <object>Account</object> <createdById>00580000003KrL0AAK</createdById> <createdDate>2018-05-22T06:09:45.000Z</createdDate> <systemModstamp>2018-05-22T06:09:45.000Z</systemModstamp> <state>Open</state> <concurrencyMode>Parallel</concurrencyMode> <contentType>JSON</contentType> <numberBatchesQueued>0</numberBatchesQueued> <numberBatchesInProgress>0</numberBatchesInProgress> <numberBatchesCompleted>0</numberBatchesCompleted> <numberBatchesFailed>0</numberBatchesFailed> <numberBatchesTotal>0</numberBatchesTotal> <numberRecordsProcessed>0</numberRecordsProcessed> <numberRetries>0</numberRetries> <apiVersion>41.0</apiVersion> <numberRecordsFailed>0</numberRecordsFailed> <totalProcessingTime>0</totalProcessingTime> <apiActiveProcessingTime>0</apiActiveProcessingTime> <apexProcessingTime>0</apexProcessingTime> </jobInfo> jobId = 1200a000001aABCD1 Step 3: Add a Batch to the Job The next step is to add a batch to the Job created in the previous step. A batch contains a SOQL query used to get the data from SFDC. After submitting the batch, we will extract the batchId from the JSON response received. query='select ID,NAME,PARENTID,PHONE,ACCOUNT_STATUS from ACCOUNT' curl -d "${query}" -H "X-SFDC-Session: ${sessionId}" -H "Content-Type: application/json; charset=UTF-8" https://${instance}.salesforce.com/services/async/41.0/job/${jobId}/batch | python -m json.tool > account_batch_response.json batchId=$(grep '"id":' $work_dir/job_responses/account_batch_response.json | awk -F':' '{print $2}' | tr -d ' ,"') account_batch_response.json: { "apexProcessingTime": 0, "apiActiveProcessingTime": 0, "createdDate": "2018-11-30T06:52:22.000+0000", "id": "1230a00000A1zABCDE", "jobId": "1200a000001aABCD1", "numberRecordsFailed": 0, "numberRecordsProcessed": 0, "state": "Queued", "stateMessage": null, "systemModstamp": "2018-11-30T06:52:22.000+0000", "totalProcessingTime": 0 } batchId = 1230a00000A1zABCDE Step 4: Check The Batch Status As Bulk API is an Asynchronous API, the batch will be run at the Salesforce end and the state will be changed to Completed or Failed once the results are ready to download. We need to repeatedly check the batch status until the state changes to either Completed or Failed. status="" while [ ! "$status" == "Completed" ] && [ ! 
"$status" == "Failed" ] do sleep 10; #check status every 10 seconds curl -H "X-SFDC-Session: ${sessionId}" https://${instance}.salesforce.com/services/async/41.0/job/${jobId}/batch/${batchId} | python -m json.tool > account_batchstatus_response.json status=$(grep -i '"state":' account_batchstatus_response.json | awk -F':' '{print $2}' | tr -d ' ,"') done; account_batchstatus_response.json: { "apexProcessingTime": 0, "apiActiveProcessingTime": 0, "createdDate": "2018-11-30T06:52:22.000+0000", "id": "7510a00000J6zNEAAZ", "jobId": "7500a00000Igq5YAAR", "numberRecordsFailed": 0, "numberRecordsProcessed": 33917, "state": "Completed", "stateMessage": null, "systemModstamp": "2018-11-30T06:52:53.000+0000", "totalProcessingTime": 0 } Step 5: Retrieve the Results Once the state is updated to Completed, we can download the result dataset which will be in JSON format. The code snippet given below will extract the resultId from the JSON response and then will download the data using the resultId. if [ "$status" == "Completed" ]; then curl -H "X-SFDC-Session: ${sessionId}" https(colon)//${instance}.salesforce(dot)com/services/async/41.0/job/${jobId}/batch/${batchId}/result | python -m json.tool > account_result_response.json resultId = $(grep '"' account_result_response.json | tr -d ' ,"') curl -H "X-SFDC-Session: ${sessionId}" https(colon)//${instance}.salesforce(dot)com/services/async/41.0/job/${jobId}/batch/${batchId}/result/ ${resultId} > account.json fi account_result_response.json: [ "7110x000008jb3a" ] resultId = 7110x000008jb3a Step 6: Close the Job Once the results have been retrieved, we can close the Job. Save below XML as close-job.xml. <?xml version="1.0" encoding="UTF-8"?> <jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload"> <state>Closed</state> </jobInfo> Use the code given below to close the job, by suffixing the jobId to the close-job request URL. curl -s -H "X-SFDC-Session: ${sessionId}" -H "Content-Type: text/csv; charset=UTF-8" -d @close-job.xml https(colon)//${instance}.salesforce(dot)com/services/async/41.0/job/${jobId} After running all the above steps, we will have the account.json generated in the current working directory, which contains the account data downloaded from Salesforce in JSON format, which we will use to load data into Snowflake in next steps. Downloaded data file: $ cat ./account.json [ { "attributes" : { "type" : "Account", "url" : "/services/data/v41.0/sobjects/Account/2x234abcdedg5j" }, "Id": "2x234abcdedg5j", "Name": "Some User", "ParentId": "2x234abcdedgha", "Phone": 124567890, "Account_Status": "Active" }, { "attributes" : { "type" : "Account", "url" : "/services/data/v41.0/sobjects/Account/1x234abcdedg5j" }, "Id": "1x234abcdedg5j", "Name": "Some OtherUser", "ParentId": "1x234abcdedgha", "Phone": null, "Account_Status": "Active" } ] Step 7: Loading Data from Salesforce to Snowflake Now that we have the JSON file downloaded from Salesforce, we can use it to load the data into a Snowflake table. File extracted from Salesforce has to be uploaded to Snowflake’s internal stage or to an external stage such as Microsoft Azure or AWS S3 location. Then we can load the Snowflake table using the created Snowflake Stage. Step 8: Creating a Snowflake Stage Stage in the Snowflake is a location where data files are stored, and that location is accessible by Snowflake; then, we can use the Stage name to access the file in Snowflake or to load the table. We can create a new stage, by following below steps: Login to the Snowflake Web Client UI. 
Select the desired database from the Databases tab. Click the Stages tab. Click Create and select the desired location (Internal, Azure, or S3). Click Next. Fill in the form that appears in the next window: the stage name, the Snowflake schema for the stage, the bucket URL, and the access keys required to reach the stage location (for example, AWS keys for an S3 bucket). Click Finish.
Step 9: Creating a Snowflake File Format
Once the stage is created, the file location is set. The next step is to create the file format in Snowflake. The File Formats menu can be used to create a named file format, which can then be used to bulk load data into Snowflake. Since the extracted Salesforce file is JSON, we will create a file format that reads JSON.
Steps to create a file format: Log in to the Snowflake Web Client UI. Select the Databases tab. Click the File Formats tab. Click Create. This opens a new window where we can set the file format properties. We have selected JSON as the type and FORMAT as the schema that stores all our file formats. We have also selected the Strip Outer Array option; this is required to strip the outer array (the square brackets that enclose the entire JSON) that Salesforce adds to the JSON file.
A file format can also be created using SQL in Snowflake, and grants have to be given to allow other roles to access the format or stage we have created.
create or replace file format format.JSON_STRIP_OUTER type = 'json' field_delimiter = none record_delimiter = ' ' STRIP_OUTER_ARRAY = TRUE;
grant USAGE on FILE FORMAT FORMAT.JSON_STRIP_OUTER to role developer_role;
Step 10: Loading Salesforce JSON Data to a Snowflake Table
Now that we have created the required stage and file format in Snowflake, we can use them to bulk load the generated Salesforce JSON file into Snowflake. The advantage of the JSON type in Snowflake: Snowflake can treat semi-structured types like JSON or XML as schemaless objects and can directly query/parse the required fields without loading them into a staging table. To know more about accessing semi-structured data in Snowflake, click here.
Step 11: Parsing the JSON File in Snowflake
Using the PARSE_JSON function, we can interpret the JSON in Snowflake. The query below parses the JSON file into a tabular format. Explicit type casting is required because PARSE_JSON returns a VARIANT value rather than typed columns.
SELECT parse_json($1):Id::string, parse_json($1):Name::string, parse_json($1):ParentId::string, parse_json($1):Phone::int, parse_json($1):Account_Status::string from @STAGE.salesforce_stage/account.json (file_format => 'format.JSON_STRIP_OUTER') t;
We will create a table in Snowflake and use the above query to insert data into it. We are using Snowflake's web client UI to run these queries.
Upload file to S3:
Table creation and insert query:
Data inserted into the Snowflake target table:
Hurray!! You have successfully loaded data from Salesforce to Snowflake.
Limitations of Loading Data from Salesforce to Snowflake using Bulk API
The maximum single file size is 1 GB (data larger than 1 GB is broken into multiple parts while retrieving results). A Bulk API query doesn't support the following in a SOQL query: COUNT, ROLLUP, SUM, GROUP BY CUBE, OFFSET, and nested SOQL queries. The Bulk API doesn't support base64 data type fields.
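For readers who prefer the command line to the web UI steps above, the same staging and loading flow can be scripted with the snowsql CLI. This is only a rough sketch under assumptions not shown in the screenshots: it uses an internal stage instead of the S3 stage, the connection name my_conn is hypothetical, and it reuses the format.JSON_STRIP_OUTER file format created earlier.
# Create an internal stage and upload the extracted file (assumes snowsql is installed and my_conn is a configured connection)
snowsql -c my_conn -q "create or replace stage salesforce_stage file_format = format.JSON_STRIP_OUTER;"
snowsql -c my_conn -q "put file://account.json @salesforce_stage auto_compress=false;"
# Create the target table and load it by parsing the staged JSON
snowsql -c my_conn -q "create table if not exists account (id string, name string, parentid string, phone int, account_status string);"
snowsql -c my_conn -q "insert into account select parse_json(\$1):Id::string, parse_json(\$1):Name::string, parse_json(\$1):ParentId::string, parse_json(\$1):Phone::int, parse_json(\$1):Account_Status::string from @salesforce_stage/account.json (file_format => 'format.JSON_STRIP_OUTER') t;"
All names and paths here are placeholders; adjust them to match your own database, schema, and stage before running the commands.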
Method 3: Load Data from Salesforce to Snowflake using Snowflake Output Connection (Beta) In June 2020, Snowflake and Salesforce launched native integration so that customers can move data from Salesforce to Snowflake. This can be analyzed using Salesforce’s Einstein Analytics or Tableau. This integration is available in open beta for Einstein Analytics customers. Steps for Salesforce to Snowflake Integration Enable the Snowflake Output Connector Create the Output Connection Configure the Connection Settings Limitations of Loading Data from Salesforce to Snowflake using Snowflake Output Connection (Beta): Snowflake Output Connection (Beta) is not a full ETL solution. It extracts and loads data but lacks the capacity for complex transformations. It has limited scalability as there are limitations on the amount of data that can be transferred per object per hour. So, using Snowflake Output Connection as Salesforce to Snowflake connector is not very efficient. Use Cases of Salesforce to Snowflake Integration Real-Time Forecasting: When you connect Salesforce to Snowflake, it can be used in business for predicting end-of-the-month/ quarter/year forecasts that help in better decision-making. For example, you can use opportunity data from Salesforce with ERP and finance data from Snowflake to do so. Performance Analytics: After you import data from Salesforce to Snowflake, you can analyze your marketing campaign’s performance. You can analyze conversion rates by merging click data from Salesforce with the finance data in Snowflake. AI and Machine Learning: It can be used in business organizations to determine customer purchases of specific products. This can be done by combining Salesforce’s objects, such as website visits, with Snowflake’s POS and product category data. Conclusion This blog has covered all the steps required to extract data using Bulk API to move data from Salesforce to Snowflake. Additionally, an easier alternative using LIKE.TG has also been discussed to load data from Salesforce to Snowflake. Visit our Website to Explore LIKE.TG Want to take LIKE.TG for a spin? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite first hand. You can also have a look at the unbeatable pricing that will help you choose the right plan for your business needs. Do leave a comment on your experience of replicating data from Salesforce to Snowflake and let us know what worked for you.
How to load data from Facebook Ads to Google BigQuery
Leveraging the data from Facebook Ads Insights offers businesses a great way to measure their target audiences. However, transferring massive amounts of Facebook ad data to Google BigQuery is no easy feat. If you want to do just that, you’re in luck. In this article, we’ll be looking at how you can migrate data from Facebook Ads to BigQuery. Understanding the Methods to Connect Facebook Ads to BigQuery Load Data from Facebook Ads to BigQueryGet a DemoTry itLoad Data from Google Analytics to BigQueryGet a DemoTry itLoad Data from Google Ads to BigQueryGet a DemoTry it These are the methods you can use to move data from Facebook Ads to BigQuery: Method 1: Using LIKE.TG to Move Data from Facebook Ads to BigQuery Method 2: Writing Custom Scripts to Move Data from Facebook Ads to BigQuery Method 3: Manual Upload of Data from Facebook Ads to BigQuery Method 1: Using LIKE.TG to Move Data from Facebook Ads to BigQuery LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integration with 150+ Data Sources (40+ free sources), we help you not only export data from sources & load data to the destinations but also transform & enrich your data, & make it analysis-ready. Get Started with LIKE.TG for Free LIKE.TG can help you load data in two simple steps: Step 1: Connect Facebook Ads Account as Source Follow the below steps to set up Facebook Ads Account as source: In the Navigation Bar, Click PIPELINES. Click + CREATE in the Pipelines List View. From the Select Source Type page, select Facebook Ads. In the Configure your Facebook Ads account page, you can do one of the following: Select a previously configured account and click CONTINUE. Click Add Facebook Ads Account and follow the below steps to configure an account: Log in to your Facebook account, and in the pop-up dialog, click Continue as <Company Name> Click Save to authorize LIKE.TG to access your Facebook Ads and related statistics. Click Got it in the confirmation dialog. Configure your Facebook Ads as a source by providing the Pipeline Name, authorized account, report type, aggregation level, aggregation time, breakdowns, historical sync duration, and key fields. Step 2: Configure Google BigQuery as your Destination Click DESTINATIONS in the Navigation Bar. In the Destinations List View, Click + CREATE. Select Google BigQuery as the Destination type in the Add Destination page. Connect to your BigQuery account and start moving your data from Facebook Ads to BigQuery by providing the project ID, dataset ID, Data Warehouse name, GCS bucket. Simplify your data analysis with LIKE.TG today and Sign up here for a 14-day free trial!. Method 2: Writing Custom Scripts to Move Data from Facebook Ads to BigQuery Migrating data from Facebook Ads Insights to Google BigQuery essentially involves two key steps: Step 1: Pulling Data from Facebook Step 2: Loading Data into BigQuery Step 1: Pulling Data from Facebook Put simply, pulling data from Facebook involves downloading the relevant Ads Insights data, which can be used for a variety of business purposes. Currently, there are two main methods for users to pull data from Facebook: Through APIs. Through Real-time streams. Method 1: Through APIs Users can access Facebook’s APIs through the different SDKs offered by the platform. While Python and PHP are the main languages supported by Facebook, it’s easy to find community-supported SDKs for languages such as JavaScript, R, and Ruby. 
What’s more, the Facebook Marketing API is relatively easy to use – which is why it can be harnessed to execute requests that direct to specific endpoints. Also, since the Facebook Marketing API is a RESTful API, you can interact with it via your favorite framework or language. Like everything else Facebook-related, Ads and statistics data form part of and can be acquired through the Graph API, and any requests for statistics specific to particular ads can be sent to Facebook Insights. In turn, Insights will reply to such requests with more information on the queried ad object. If the above seems overwhelming, there’s no need to worry and we’ll be taking a look at an example to help simplify things. Suppose you want to extract all stats relevant to your account. This can be done by executing the following simple request through curl: curl -F 'level=campaign' -F 'fields=[]' -F 'access_token=<ACCESS_TOKEN>' https://graph.facebook.com/v2.5/<CAMPAIGN_ID>/insights curl -G -d 'access_token=<ACCESS_TOKEN>' https://graph.facebook.com/v2.5/1000002 curl -G -d 'access_token=<ACCESS_TOKEN>' https://graph.facebook.com/v2.5/1000002/insights Once it’s ready, the data you’ve requested will then be returned in either CSV or XLS format and be able to access it via a URL such as the one below: https://www.facebook.com/ads/ads_insights/export_report?report_run_id=<REPORT_ID> &format=<REPORT_FORMAT>&access_token=<ACCESS_TOKEN Method 2: Through Real-time Streams You can also pull data from Facebook by creating a real-time data substructure and can even load your data into the data warehouse. All you need to do to achieve all this and to receive API updates is to subscribe to real-time updates. Using the right substructure, you’ll be able to stream an almost real-time data feed to your database, and by doing so, you’ll be kept up-to-date with the latest data. Facebook Ads boasts a tremendously rich API that offers users the opportunity to extract even the smallest portions of data regarding accounts and target audience activities. More importantly, however, is that all of this real-time data can be used for analytics and reporting purposes. However, there’s a minor consideration that needs to be mentioned. It’s no secret that these resources become more complex as they continue to grow, meaning you’ll need a complex protocol to handle them and it’s worth keeping this in mind as the volume of your data grows with each passing day. Moving on, the data that you pull from Facebook can be in one of a plethora of different formats, yet BigQuery isn’t compatible with all of them. This means that it’s in your best interest to convert data into a format supported by BigQuery after you’ve pulled it from Facebook. For example, if you pull XML data, then you’ll need to convert it into any of the following data formats: CSV JSON. You should also make sure that BigQuery supports the BigQuery data types you’re using. BigQuery currently supports the following data types: STRING INTEGER FLOAT BOOLEAN RECORD TIMESTAMP Please refer to Google’s documentation on preparing data for BigQuery, to learn more. Now that you’ve understood the different data formats and types supported by BigQuery, it’s time to learn how to pull data from Facebook. Step 2: Loading Data Into BigQuery If you opt to use Google Cloud Storage to load data from Facebook Ads into BigQuery, then you’ll need to first load the data into Google Cloud Storage. This can be done in one of a few ways. First and foremost, this can be done directly through the console. 
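If you would rather script this console/Cloud Storage route, the gsutil and bq command-line tools cover both halves of it. The sketch below is illustrative only: the bucket, dataset, and table names are hypothetical placeholders, and it assumes the Ads Insights export has already been saved as CSV.
# Upload the exported Facebook Ads report to Cloud Storage (bucket name is a placeholder)
gsutil cp fb_ads_insights.csv gs://my-fb-ads-bucket/fb_ads_insights.csv
# Load the file from Cloud Storage into a BigQuery table, letting BigQuery infer the schema
bq load --source_format=CSV --autodetect --skip_leading_rows=1 my_dataset.fb_ads_insights gs://my-fb-ads-bucket/fb_ads_insights.csv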
Alternatively, you can post data with the help of the JSON API. One thing to note here is that APIs play a crucial role, both in pulling data from Facebook Ads and loading data into Bigquery. Perhaps the simplest way to load data into BigQuery is by requesting HTTP POST using tools such as curl. Should you decide to go this route, your POST request should look something like this: POST /upload/storage/v1/b/myBucket/o?uploadType=media&name= TEST HTTP/1.1 Host: www.googleapis.com Content-Type: application/text Content-Length: number_of_bytes_in_file Authorization: Bearer your_auth_token your Facebook Ads data And if you enter everything correctly you’ll get a response that looks like this: HTTP/1.1 200 Content-Type: application/json { "name": "TEST" } However, remember that tools like curl are only useful for testing purposes. So, you’ll need to write specific codes to send data to Google if you want to automate the data loading process. This can be done in one of the following languages when using the Google App Engine to write codes: Python Java PHP Go Apart from coding for the Google App Engine, the above languages can even be used to access Google Cloud Storage. Once you’ve imported your extracted data into Google Cloud Storage, you’ll need to create and run a LoadJob, which directs to the data that needs to be imported from the cloud and will ultimately load the data into BigQuery. This works by specifying source URLs that point to the queried objects. This method makes use of POST requests for storing data in the Google Cloud Storage API, from where it will load the data into BigQuery. Another method to accomplish this is by posting a direct HTTP POST request to BigQuery with the data you’d like to query. While this method is very similar to loading data through the JSON API, it differs by using specific BigQuery end-points to load data directly. Furthermore, the interaction is quite simple and can be carried out via either the framework or the HTTP client library of your preferred language. Limitations of using Custom Scripts to Connect Facebook Ads to BigQuery Building a custom code for transfer data from Facebook Ads to Google BigQuery may appear to be a practically sound arrangement. However, this approach comes with some limitations too. Code Maintenance: Since you are building the code yourself, you would need to monitor and maintain it too. On the off chance that Facebook refreshes its API or the API sends a field with a datatype which your code doesn’t understand, you would need to have resources that can handle these ad-hoc requests. Data Consistency: You additionally will need to set up a data validation system in place to ensure that there is no data leakage in the infrastructure. Real-time Data: The above approach can help you move data one time from Facebook Ads to BigQuery. If you are looking to analyze data in real-time, you will need to deploy additional code on top of this. Data Transformation Capabilities: Often, there will arise a need for you to transform the data received from Facebook before analyzing it. Eg: When running ads across different geographies globally, you will want to convert the timezones and currencies from your raw data and bring them to a standard format. This would require extra effort. Utilizing a Data Integration stage like LIKE.TG frees you of the above constraints. Method 3: Manual Upload of Data from Facebook Ads to BigQuery This is an affordable solution for moving data from Facebook Ads to BigQuery. 
These are the steps that you can carry out to load data from Facebook Ads to BigQuery manually: Step 1: Create a Google Cloud project, after which you will be taken to a “Basic Checklist”. Next, navigate to Google BigQuery and look for your new project. Step 2: Log In to Facebook Ads Manager and navigate to the data you wish to query in Google BigQuery. If you need daily data, you need to segment your reports by day. Step 3: Download the data by selecting “Reports” and then click on “Export Table Data”. Export your data as a .csv file and save it on your PC. Step 4: Navigate back to Google BigQuery and ensure that your project is selected at the top of the screen. Click on your project ID in the left-hand navigation and click on “+ Create Dataset” Step 5: Provide a name for your dataset and ensure that an encryption method is set. Click on “Create Dataset” followed by clicking on the name of your new dataset in the left-hand navigation. Next, click on “Create Table” to finish this step. Step 6: Go to the source section, then create your table from the Upload option. Find your Facebook Ads report that you saved to your PC and choose file format as CSV. In the destination section, select “Search for a project”. Next, find your project name from the dropdown list. Select your dataset name and the name of the table. Step 7: Go to the schema section and click on the checkbox to allow BigQuery to either auto-detect a schema or click on “Edit as Text” to manually name schema, set mode, and type. Step 8: Go to the Partition and Cluster Settings section and choose “Partition by Ingestion Time” or “No partitioning” based on your needs. Partitioning splits your table into smaller segments that allow smaller sections of data to be queried quickly. Next, navigate to Advanced options and set the field delimiter like a comma. Step 9: Click “Create table”. Your Data Warehouse will begin to populate with Facebook Ads data. You can check your Job History for the status of your data load. Navigate to Google BigQuery and click on your dataset ID. Step 10: You can write SQL queries against your Facebook data in Google BigQuery, or export your data to Google Data Studio along with other third-party tools for further analysis. You can repeat this process for all additional Facebook data sets you wish to upload and ensure fresh data availability. Limitations of Manual Upload of Data from Facebook Ads to BigQuery Data Extraction: Downloading data from Facebook Ads manually for large-scale data is a daunting and time-consuming task. Data Uploads: A manual process of uploading will need to be watched and involved in continuously. Human Error: In a manual process, errors such as mistakes in data entry, omitted uploads, and duplication of records can take place. Data Integrity: There is no automated assurance mechanism to ensure that integrity and consistency of the data. Delays: Manual uploads run the risk of creating delays in availability and the real integration of data for analysis. Benefits of sending data from Facebook Ads to Google BigQuery Identify patterns with SQL queries: To gain deeper insights into your ad performance, you can use advanced SQL queries. This helps you to analyze data from multiple angles, spot patterns, and understand metric correlations. Conduct multi-channel ad analysis: You can integrate your Facebook Ads data with metrics from other sources like Google Ads, Google Analytics 4, CRM, or email marketing apps. 
By doing this, you can analyze your overall marketing performance and understand how different channels work together. Analyze ad performance in-depth: You can carry out a time series analysis to identify changes in ad performance over time and understand how factors like seasonality impact ad performance. Leverage ML algorithms: You can also build ML models and train them to forecast future performance, identify which factors drive ad success, and optimize your campaigns accordingly. Data Visualization: ​​Build powerful interactive dashboards by connecting BigQuery to PowerBI, Looker Studio (former Google Data Studio), or another data visualization tool. This enables you to create custom dashboards that showcase your key metrics, highlight trends, and provide actionable insights to drive better marketing decisions. Use Cases of Loading Facebook Ads to BigQuery Marketing Campaigns: Analyzing facebook ads audience data in bigquery can help you to enhance the performance of your marketing campaigns. Advertisement data from Facebook combined with business data in BigQuery can give better insights for decision-making. Personalized Audience Targeting: On Facebook ads conversion data in BigQuery, you can utilize BigQuery’s powerful querying capabilities to segment audiences based on detailed demographics, interests, and behaviors extracted from Facebook Ads data. Competitive Analysis: You can compare your Facebook attribution data in BigQuery to understand the Ads performance of industry competitors using publicly available data sources. Get Real-time Streams of Your Facebook Ad Statistics You can easily create a real-time data infrastructure for extracting data from Facebook Ads and loading them into a Data Warehouse repository. You can achieve this by subscribing to real-time updates to receive API updates with Webhooks. Armed with the proper infrastructure, you can have an almost real-time data feed into your repository and ensure that it will always be up to date with the latest bit of data. Facebook Ads is a real-time bidding system where advertisers can compete to showcase their advertising material. Facebook Ads imparts a very rich API that gives you the opportunity to get extremely granular data regarding your accounting activities and leverage it for reporting and analytic purposes. This richness will cost you, though many complex resources must be tackled with an equally intricate protocol. Prepare Your Facebook Ads Data for Google BigQuery Before diving into the methods that can be deployed to set up a connection from Facebook Ads to BigQuery, you should ensure that it is furnished in an appropriate format. For instance, if the API you pull data from returns an XML file, you would first have to transform it to a serialization that can be understood by BigQuery. As of now, the following two data formats are supported: JSON CSV Apart from this, you also need to ensure that the data types you leverage are the ones supported by Google BigQuery, which are as follows: FLOAT RECORD TIMESTAMP INTEGER FLOAT STRING Additional Resources on Facebook Ads To Bigquery Explore how to Load Data into Bigquery Conclusion This blog talks about the 3 different methods you can use to move data from Facebook Ads to BigQuery in a seamless fashion. It also provides information on the limitations of using the manual methods and use cases of integrating Facebook ads data to BigQuery. FAQ about Facebook Ads to Google BigQuery How do I get Facebook data into BigQuery? 
To get Facebook data into BigQuery, you can use one of the following methods:
1. Use ETL tools
2. Google Cloud Data Transfer Service
3. Run custom scripts
4. Manual CSV upload
How do I integrate Google Ads to BigQuery?
Google Ads has a built-in connector in BigQuery. To use it, go to your BigQuery console, find the data transfer service, and set up a new transfer from Google Ads.
How to extract data from Facebook Ads?
To extract data from Facebook Ads, you can use the Facebook Ads API or third-party ETL tools like LIKE.TG Data.
Do you have any experience with moving data from Facebook Ads to BigQuery? Let us know in the comments section below.
How to load data from MySQL to Snowflake using 2 Easy Methods
Relational databases, such as MySQL, have traditionally helped enterprises manage and analyze massive volumes of data effectively. However, as scalability, real-time analytics, and seamless data integration become increasingly important, contemporary data systems like Snowflake have become strong substitutes. After experimenting with a few different approaches and learning from my failures, I’m excited to share my tried-and-true techniques for moving data from MySQL to Snowflake.In this blog, I’ll walk you through two simple migration techniques: manual and automated. I will also share the factors to consider while choosing the right approach. Select the approach that best meets your needs, and let’s get going! What is MySQL? MySQL is an open-source relational database management system (RDBMS) that allows users to access and manipulate databases using Structured Query Language (SQL). Created in the middle of the 1990s, MySQL’s stability, dependability, and user-friendliness have made it one of the most widely used databases worldwide. Its structured storage feature makes it ideal for organizations that require high-level data integrity, consistency, and reliability. Some significant organizations that use MySQL include Amazon, Uber, Airbnb, and Shopify. Key Features of MySQL : Free to Use: MySQL is open-source, so that you can download, install, and use it without any licensing costs. This allows you to use all the functionalities a robust database management system provides without many barriers. However, for large organizations, it also offers commercial versions like MySQL Cluster Carrier Grade Edition and MySQL Enterprise Edition. Scalability: Suitable for both small and large-scale applications. What is Snowflake? Snowflake is a cloud-based data warehousing platform designed for high performance and scalability. Unlike traditional databases, Snowflake is built on a cloud-native architecture, providing robust data storage, processing, and analytics capabilities. Key Features of Snowflake : Cloud-Native Architecture: Fully managed service that runs on cloud platforms like AWS, Azure, and Google Cloud. Scalability and Elasticity: Automatically scales compute resources to handle varying workloads without manual intervention. Why move MySQL data to Snowflake? Performance and Scalability: MySQL may experience issues managing massive amounts of data and numerous user queries simultaneously as data quantity increases. Snowflake’s cloud-native architecture, which offers nearly limitless scalability and great performance, allows you to handle large datasets and intricate queries effectively. Higher Level Analytics: Snowflake offers advanced analytical features like data science and machine learning workflow assistance. These features can give you deeper insights and promote data-driven decision-making. Economy of Cost: Because Snowflake separates computation and storage resources, you can optimize your expenses by only paying for what you utilize. The pay-as-you-go approach is more economical than the upkeep and expansion of MySQL servers situated on-site. Data Integration and Sharing: Snowflake’s powerful data-sharing features make integrating and securely exchanging data easier across departments and external partners. This skill is valuable for firms seeking to establish a cohesive data environment. Streamlined Upkeep: Snowflake removes the need for database administration duties, which include software patching, hardware provisioning, and backups. 
It is a fully managed service that enables you to concentrate less on maintenance and more on data analysis. Sync your Data from MySQL to SnowflakeGet a DemoTry itSync your Data from Salesforce to SnowflakeGet a DemoTry itSync your Data from MongoDB to SnowflakeGet a DemoTry it Methods to transfer data from MySQL to Snowflake: Method 1: How to Connect MySQL to Snowflake using Custom Code Prerequisites You should have a Snowflake Account. If you don’t have one, check out Snowflake and register for a trial account. A MySQL server with your database. You can download it from MySQL’s official website if you don’t have one. Let’s examine the step-by-step method for connecting MySQL to Snowflake using the MySQL Application Interface and Snowflake Web Interface. Step 1: Extract Data from MySQL I created a dummy table called cricketers in MySQL for this demo. You can click on the rightmost table icon to view your table. Next, we need to save a .csv file of this table in our local storage to later load it into Snowflake. You can do this by clicking on the icon next to Export/Import. This will automatically save a .csv file of the table that is selected on your local storage. Step 2: Create a new Database in Snowflake Now, we need to import this table into Snowflake. Log into your Snowflake account, click Data>Databases, and click the +Database icon on the right-side panel to create a new database. For this guide, I have already made a database called DEMO. Step 3: Create a new Table in that database Now click DEMO>PUBLIC>Tables, click the Create button, and select the From File option from the drop-down menu. A Dropbox will appear where you can drag and drop your .csv file. Select and create a new table and give it a name. You can also choose from existing tables, and your data will be appended to that table. Step 4: Edit your table schema Click next. In this dialogue box, you can edit the schema. After modifying the schema according to your needs, click the load button. This will start loading your table data from the .csv file to Snowflake. Step 5: Preview your loaded table Once the loading process has been completed, you can view your data by clicking the preview button. Note: An alternative method of moving data is to create an Internal/External stage in Snowflake and load data into it. Limitations of Manually Migrating Data from MySQL to Snowflake: Error-prone: Custom coding and SQL Queries introduce a higher risk of errors potentially leading to data loss or corruption. Time-Consuming: Handling tables for large datasets is highly time-consuming. Orchestration Challenges: Manually migrating data needs more monitoring, alerting, and progress-tracking features. Method 2: How to Connect MySQL to Snowflake using an Automated ETL Platform Prerequisites: To set up your pipeline, you need a LIKE.TG account. If you don’t have one, you can visit LIKE.TG . A Snowflake account. A MySQL server with your database. Step 1:Connect your MySQL account to LIKE.TG ’s Platform. To begin with, I am logging in to my LIKE.TG platform. Next, create a new pipeline by clicking the Pipelines and the +Create button. LIKE.TG provides built-in MySQL integration that can connect to your account within minutes. Choose MySQL as the source and fill in the necessary details. Enter your Source details and click on TEST & CONTINUE. Next, Select all the objects that you want to replicate. Objects are nothing but the tables. 
Step 2: Connect your Snowflake account to LIKE.TG ’s Platform You have successfully connected your source and destination with these two simple steps. From here, LIKE.TG will take over and move your valuable data from MySQL to Snowflake. Advantages of using LIKE.TG : Auto Schema Mapping: LIKE.TG eliminates the tedious task of schema management. It automatically detects the schema of incoming data and maps it to the destination schema. Incremental Data Load: Allows the transfer of modified data in real-time, ensuring efficient bandwidth utilization on both ends. Data Transformation: It provides a simple interface for perfecting, modifying, and enriching the data you want to transfer. Note: Alternatively, you can use SaaS ETL platforms like Estuary or Airbyte to migrate your data. Best Practices for Data Migration: Examine Data and Workloads: Before migrating, constantly evaluate the schema, volume of your data, and kinds of queries currently running in your MySQL databases. Select the Appropriate Migration Technique: Handled ETL Procedure: This procedure is appropriate for smaller datasets or situations requiring precise process control. It requires manually loading data into Snowflake after exporting it from MySQL (for example, using CSV files). Using Snowflake’s Staging: For larger datasets, consider utilizing either the internal or external stages of Snowflake. Using a staging area, you can import the data into Snowflake after exporting it from MySQL to a CSV or SQL dump file. Validation of Data and Quality Assurance: Assure data integrity before and after migration by verifying data types, restrictions, and completeness. Verify the correctness and consistency of the data after migration by running checks. Enhance Information for Snowflake: Take advantage of Snowflake’s performance optimizations. Utilize clustering keys to arrange information. Make use of Snowflake’s built-in automatic query optimization tools. Think about using query pattern-based partitioning methods. Manage Schema Changes and Data Transformations: Adjust the MySQL schema to meet Snowflake’s needs. Snowflake supports semi-structured data, although the structure of the data may need to be changed. Plan the necessary changes and carry them out during the migration process. Verify that the syntax and functionality of SQL queries are compatible with Snowflake. Troubleshooting Common Issues : Problems with Connectivity: Verify that Snowflake and MySQL have the appropriate permissions and network setup. Diagnose connectivity issues as soon as possible by utilizing monitoring and logging technologies. Performance bottlenecks: Track query performance both before and after the move. Optimize SQL queries for the query optimizer and architecture of Snowflake. Mismatches in Data Type and Format: Identify and resolve format and data type differences between Snowflake and MySQL. When migrating data, make use of the proper data conversion techniques. Conclusion: You can now seamlessly connect MySQL to Snowflake using manual or automated methods. The manual method will work if you seek a more granular approach to your migration. However, if you are looking for an automated and zero solution for your migration, book a demo with LIKE.TG . FAQ on MySQL to Snowflake How to transfer data from MySQL to Snowflake? Step 1: Export Data from MySQLStep 2: Upload Data to SnowflakeStep 3: Create Snowflake TableStep 4: Load Data into Snowflake How do I connect MySQL to Snowflake? 1. Snowflake Connector for MySQL2. ETL/ELT Tools3. 
Custom Scripts
Does Snowflake use MySQL?
No, Snowflake does not use MySQL.
How to get data from SQL to Snowflake?
Step 1: Export Data
Step 2: Stage the Data
Step 3: Load Data
How to replicate data from SQL Server to Snowflake?
1. Using ETL/ELT Tools
2. Custom Scripts
3. Database Migration Services
How to Load Data from PostgreSQL to Redshift: 2 Easy Methods
Are you tired of locally storing and managing files on your Postgres server? You can move your precious data to a powerful destination such as Amazon Redshift, and that too within minutes.Data engineers are given the task of moving data between storage systems like applications, databases, data warehouses, and data lakes. This can be exhaustive and cumbersome. You can follow this simple step-by-step approach to transfer your data from PostgreSQL to Redshift so that you don’t have any problems with your data migration journey. Why Replicate Data from Postgres to Redshift? Analytics: Postgres is a powerful and flexible database, but it’s probably not the best choice for analyzing large volumes of data quickly. Redshift is a columnar database that supports massive analytics workloads. Scalability: Redshift can quickly scale without any performance problems, whereas Postgres may not efficiently handle massive datasets. OLTP and OLAP: Redshift is designed for Online Analytical Processing (OLAP), making it ideal for complex queries and data analysis. Whereas, Postgres is an Online Transactional Processing (OLTP) database optimized for transactional data and real-time operations. Load Data from PostgreSQL to RedshiftGet a DemoTry itLoad Data from MongoDB to RedshiftGet a DemoTry itLoad Data from Salesforce to RedshiftGet a DemoTry it Methods to Connect or Move PostgreSQL to Redshift Method 1: Connecting Postgres to Redshift Manually Prerequisites: Postgres Server installed on your local machine. Billing enabled AWS account. Step 1: Configure PostgreSQL to export data as CSV Step 1. a) Go to the directory where PostgreSQL is installed. Step 1. b) Open Command Prompt from that file location. Step 1. c) Now, we need to enter into PostgreSQL. To do so, use the command: psql -U postgres Step 1. d) To see the list of databases, you can use the command: \l I have already created a database named productsdb here. We will be exporting tables from this database. This is the table I will be exporting. Step 1. e) To export as .csv, use the following command: \copy products TO '<your_file_location><your_file_name>.csv' DELIMITER ',' CSV HEADER; Note: This will create a new file at the mentioned location. Go to your file location to see the saved CSV file. Step 2: Load CSV to S3 Bucket Step 2. a) Log Into your AWS Console and select S3. Step 2. b) Now, we need to create a new bucket and upload our local CSV file to it. You can click Create Bucket to create a new bucket. Step 2. c) Fill in the bucket name and required details. Note: Uncheck Block Public Access Step 2. d) To upload your CSV file, go to the bucket you created. Click on upload to upload the file to this bucket. You can now see the file you uploaded inside your bucket. Step 3: Move Data from S3 to Redshift Step 3. a) Go to your AWS Console and select Amazon Redshift. Step 3. b) For Redshift to load data from S3, it needs permission to read data from S3. To assign this permission to Redshift, we can create an IAM role for that and go to security and encryption. Click on Manage IAM roles followed by Create IAM role. Note: I will select all s3 buckets. You can select specific buckets and give access to them. Click Create. Step 3. c) Go back to your Namespace and click on Query Data. Step 3. d) Click on Load Data to load data in your Namespace. Click on Browse S3 and select the required Bucket. Note: I don’t have a table created, so I will click Create a new table, and Redshift will automatically create a new table. 
Note: Select the IAM role you just created and click on Create. Step 3. e) Click on Load Data. A Query will start that will load your data from S3 to Redshift. Step 3. f) Run a Select Query to view your table. Method 2: Using LIKE.TG Data to connect PostgreSQL to Redshift Prerequisites: Access to PostgreSQL credentials. Billing Enabled Amazon Redshift account. Signed Up LIKE.TG Data account. Step 1: Create a new Pipeline Step 2: Configure the Source details Step 2. a) Select the objects that you want to replicate. Step 3: Configure the Destination details. Step 3. a) Give your destination table a prefix name. Note: Keep Schema mapping turned on. This feature by LIKE.TG will automatically map your source table schema to your destination table. Step 4: Your Pipeline is created, and your data will be replicated from PostgreSQL to Amazon Redshift. Limitations of Using Custom ETL Scripts These challenges have an impact on ensuring that you have consistent and accurate data available in your Redshift in near Real-Time. The Custom ETL Script method works well only if you have to move data only once or in batches from PostgreSQL to Redshift. The Custom ETL Script method also fails when you have to move data in near real-time from PostgreSQL to Redshift. A more optimal way is to move incremental data between two syncs from Postgres to Redshift instead of full load. This method is called the Change Data Capture method. When you write custom SQL scripts to extract a subset of data often those scripts break as the source schema keeps changing or evolving. Additional Resources for PostgreSQL Integrations and Migrations How to load data from postgresql to biquery Postgresql on Google Cloud Sql to Bigquery Migrate Data from Postgres to MySQL How to migrate Data from PostgreSQL to SQL Server Export a PostgreSQL Table to a CSV File Conclusion This article detailed two methods for migrating data from PostgreSQL to Redshift, providing comprehensive steps for each approach. The manual ETL process described in the second method comes with various challenges and limitations. However, for those needing real-time data replication and a fully automated solution, LIKE.TG stands out as the optimal choice. FAQ on PostgreSQL to Redshift How can the data be transferred from Postgres to Redshift? Following are the ways by which you can connect Postgres to Redshift1. Manually, with the help of the command line and S3 bucket2. Using automated Data Integration Platforms like LIKE.TG . Is Redshift compatible with PostgreSQL? Well, the good news is that Redshift is compatible with PostgreSQL. The slightly bad news, however, is that these two have several significant differences. These differences will impact how you design and develop your data warehouse and applications. For example, some features in PostgreSQL 9.0 have no support from Amazon Redshift. Is Redshift faster than PostgreSQL? Yes, Redshift works faster for OLAP operations and retrieves data faster than PostgreSQL. How to connect to Redshift with psql? You can connect to Redshift with psql in the following steps1. First, install psql on your machine.2. Next, Use this command to connect to Redshift:psql -h your-redshift-cluster-endpoint -p 5439 -U your-username -d your-database3. It will prompt for the password. Enter your password, and you will be connected to Redshift. Want to take LIKE.TG for a spin? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand. Check out our transparent pricing to make an informed decision! 
Share your understanding of PostgreSQL to Redshift migration in the comments section below!
How to Load Google Sheets Data to MySQL: 2 Easy Methods
While Google Sheets provides some impressive features, the capabilities for more advanced Data Visualization and Querying make the transfer from Google Sheets to MySQL Database useful. Are you trying to move data from Google Sheets to MySQL to leverage the power of SQL for data analysis, or are you simply looking to back up data from Google Sheets? Whichever is the case, this blog can surely provide some help. The article will introduce you to 2 easy methods to move data from Google Sheets to MySQL in real-time. Read along to decide which method suits you the best! Introduction to Google Sheets Google Sheets is a free web-based spreadsheet program that Google provides. It allows users to create and edit spreadsheets, but more importantly, it allows multiple users to collaborate on a single document, seeing your collaborators ’ contributions in real-time simultaneously. It’s part of the Google suite of applications, a collection of free productivity apps owned and maintained by Google. Despite being free, Google Sheets is a fully functional spreadsheet program, with most of the capabilities and features of more expensive spreadsheet software. Google Sheets is compatible with the most popular spreadsheet formats so that you can continue your work. With Google Sheets, like all Google Drive programs, your files are accessible via computer and/or mobile devices. To learn more about Google Sheets. Introduction to MySQL MySQL is an open-source relational database management system or RDMS, and it is managed using Structured Query Language or SQL, hence its name. MySQL was originally developed and owned by Swedish company MySQL AB, but Sun Microsystems acquired MySQL AB in 2008. In turn, Sun Microsystems was then bought by Oracle two years later, making them the present owners of MySQL. MySQL is a very popular database program that is used in several equally popular systems such as the LAMP stack (Linux, Apache, MySQL, Perl/PHP/Python), Drupal, and WordPress, just to name a few, and is used by many of the largest and most popular websites, including Facebook, Flickr, Twitter, and Youtube. MySQL is also incredibly versatile as it works on various operating systems and system platforms, from Microsoft Windows to Apple MacOS. Move Google Sheets Data to MySQL Using These 2 Methods There are several ways that data can be migrated from Google Sheets to MySQL. A common method to import data from Google Sheets to MySQL is by using the Google Sheets API along with MySQL connectors. Out of them, these 2 methods are the most feasible: Method 1: Manually using the command line Method 2: Using LIKE.TG to Set Up Google Sheets to MySQL Integration Load Data from Google Sheets to MySQLGet a DemoTry itLoad Data from Google Ads to MySQLGet a DemoTry itLoad Data from Salesforce to MySQLGet a DemoTry it Method 1: Connecting Google Sheets to MySQL Manually Using the Command Line Moving data from Google Sheets to MySQL involves various steps. This example demonstrates how to connect to create a table for the product listing data in Google Sheets, assuming that the data should be in two columns: Id Name To do this migration, you can follow these steps: Step 1: Prepare your Google Sheets Data Firstly, you must ensure that the data in your Google Sheets is clean and formatted correctly. Then, to export your Google Sheets data, click on File > Download and choose a suitable format for MySQL import. CSV (Comma-separated values) is a common choice for this purpose. 
After this, your CSV file will get downloaded to your local machine. Step 2: Create a MySQL database and Table Login to your MySQL server using the command prompt. Create a database using the following command: CREATE DATABASE your_database_name; Use that Database by running the command: Use your_database_name; Now, create a table in your database using the following command: CREATE TABLE your_table_name ( column1_name column1_datatype, column2_name column2_datatype, …… ); Step 3: Upload your CSV data to MySQL Use the LOAD DATA INFILE command to import the CSV file. The command will look something like this: LOAD DATA INFILE '/path/to/your/file.csv' INTO TABLE your_table_name FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' IGNORE 1 ROWS; Note: The file path should be the absolute path to where the CSV file is stored on the server. If you’re importing the file from your local machine to a remote server, you might need to use tools like PuTTY to download the pscp.exe file. Then, you can use that command to load your CSV file from your local machine to Ubuntu and then import that data to your MySQL database. After running the above command, your data will be migrated from Google Sheets to MySQL. To understand this better, have a look at an example: Step 6: Clean Up and Validate Review the data. Check for any anomalies or issues with the imported data. Run some queries to validate the imported data. Limitations and Challenges of Using the Command Line Method to Connect Google Sheets to MySQL Complex: It requires technical knowledge of SQL and command lines, so it could be difficult for people with no/less technical knowledge to implement. Error-prone: It provides limited feedback or error messages, making debugging challenging. Difficult to scale: Scaling command-line solutions for larger datasets or more frequent updates gets trickier and error-prone. Method 2:Connecting Google Sheets to MySQL Integration Using LIKE.TG . The abovementioned methods could be time-consuming and difficult to implement for people with little or no technical knowledge. LIKE.TG is a no-code data pipeline platform that can automate this process for you. You can transfer your Google Sheet data to MySQL using just two steps: Step 1: Configure the Source Log into your LIKE.TG Account Go to Pipelines and select the ‘create’ option. Select ‘Google Sheets’ as your source. Fill in all the required fields and click on Test & Continue. Step 2: Configure the Destination Select MySQL as your destination. Fill out the required fields and click on Save & Continue. With these extremely simple steps, you have created a data pipeline to migrate your data seamlessly from Google Sheets to MySQL. Advantages of Using LIKE.TG to Connect Google Sheets to MySQL Database The relative simplicity of using LIKE.TG as a data pipeline platform, coupled with its reliability and consistency, takes the difficulty out of data projects. You can also read our article about Google Sheets to Google Data Studio. It was great. All I had to do was do a one-time setup and the pipelines and models worked beautifully. Data was no more the bottleneck – Abhishek Gadela, Solutions Engineer, Curefit Why Connect Google Sheets to MySQL Database? Real-time Data Updates: By syncing Google Sheets with MySQL, you can keep your spreadsheets up to date without updating them manually. Centralized Data Management: In MySQL, large datasets are stored and managed centrally to facilitate a consistent view across the various Google Sheets. 
Historical Data Analysis: Google Sheets has limits on historical data. Syncing data to MySQL allows for long-term data retention and analysis of historical trends over time.
Scalability: MySQL can handle enormous datasets efficiently, tolerating expansion and complicated data structures better than spreadsheets alone.
Data Security: Control access rights and encryption mechanisms in MySQL to secure critical information.
Additional Resources on Google Sheets to MySQL
More on Google Script Connect to MySQL
Conclusion
The blog provided a detailed explanation of 2 methods to set up your Google Sheets to MySQL integration. Although effective, the manual command-line method is time-consuming and requires a lot of code. You can use LIKE.TG to import data from Google Sheets to MySQL and handle the ETL process. To learn more about how to import data from various sources to your desired destination, sign up for LIKE.TG 's 14-day free trial.
FAQ on Google Sheets to MySQL
Can I connect Google Sheets to SQL?
Yes, you can connect Google Sheets to SQL databases.
How do I turn a Google Sheet into a database?
1. Use Google Apps Script
2. Third-party add-ons
3. Use formulas and functions
How do I sync MySQL to Google Sheets?
1. Use Google Apps Script
2. Third-party add-ons
3. Google Cloud Functions and Google Cloud SQL
Can Google Sheets pull data from a database?
Yes, Google Sheets can pull data from a database.
How do I import Google Sheets to MySQL?
1. Use Google Apps Script
2. Third-party add-ons
3. CSV export and import
Share your experience of connecting Google Sheets to MySQL in the comments section below!
How To Migrate a MySQL Database Between Two Servers
There are many use cases when you must migrate MySQL database between 2 servers, like cloning a database for testing, a separate database for running reports, or completely migrating a database system to a new server. Broadly, you will take a data backup on the first server, transfer it remotely to the destination server, and finally restore the backup on the new MySQL instance. This article will walk you through the steps to migrate MySQL Database between 2 Servers using 3 simple steps. Additionally, we will explore the process of performing a MySQL migration, using copy MySQL database from one server to another operation. This process is crucial when you want to move your MySQL database to another server without losing any data or functionality. We will cover the necessary steps and considerations involved in successfully completing a MySQL migration. So, whether you are looking to clone a database, create a separate database for reporting purposes, or completely migrate your database to a new server, this guide will provide you with the information you need. Steps to Migrate MySQL Database Between 2 Servers Let’s understand the steps to migrate the MySQL database between 2 servers. Understanding the process of transferring MySQL databases from one server to another is crucial for maintaining data integrity and continuity of services. To migrate MySQL database seamlessly, ensure both source and target servers are compatible. Below are the steps you can follow to understand how to migrate MySQL database between 2 servers: Step 1: Backup the Data Step 2: Copy the Database Dump on the Destination Server Step 3: Restore the Dump‘ Want to migrate your SQL data effortlessly? Check out LIKE.TG ’s no-code data pipeline that allows you to migrate data from any source to a destination with just a few clicks. Start your 14 days trial now for free! Get Started with LIKE.TG for Free 1) Backup the Data The first step to migrate MySQL database is to take a dump of the data that you want to transfer. This operation will help you move mysql database to another server. To do that, you will have to use mysqldump command. The basic syntax of the command is: mysqldump -u [username] -p [database] > dump.sql If the database is on a remote server, either log in to that system using ssh or use -h and -P options to provide host and port respectively. mysqldump -P [port] -h [host] -u [username] -p [database] > dump.sql There are various options available for this command, let’s go through the major ones as per the use case. A) Backing Up Specific Databases mysqldump -u [username] -p [database] > dump.sql This command dumps specified databases to the file. You can specify multiple databases for the dump using the following command: mysqldump -u [username] -p --databases [database1] [database2] > dump.sql You can use the –all-databases option to backup all databases on the MySQL instance. mysqldump -u [username] -p --all-databases > dump.sql B) Backing Up Specific Tables The above commands dump all the tables in the specified database, if you need to take backup of some specific tables, you can use the following command: mysqldump -u [username] -p [database] [table1] [table2] > dump.sql C) Custom Query If you want to backup data using some custom query, you will need to use the where option provided by mysqldump. 
mysqldump -u [username] -p [database] [table1] --where="WHERE CLAUSE" > dump.sql
Example: mysqldump -u root -p testdb table1 --where="mycolumn = myvalue" > dump.sql
Note: By default, the mysqldump command includes DROP TABLE and CREATE TABLE statements in the created dump. Hence, if you are using incremental backups or you specifically want to restore data without deleting previous data, make sure you use the --no-create-info option while creating the dump.
mysqldump -u [username] -p [database] --no-create-info > dump.sql
If you need to copy just the schema but not the data, you can use the --no-data option while creating the dump.
mysqldump -u [username] -p [database] --no-data > dump.sql
Other use cases
Here's a list of uses for the mysqldump command based on use case:
To back up a single database: mysqldump -u [username] -p [database] > dump.sql
To back up multiple databases: mysqldump -u [username] -p --databases [database1] [database2] > dump.sql
To back up all databases on the instance: mysqldump -u [username] -p --all-databases > dump.sql
To back up specific tables: mysqldump -u [username] -p [database] [table1] [table2] > dump.sql
To back up data using a custom query: mysqldump -u [username] -p [database] [table1] --where="WHERE CLAUSE" > dump.sql
Example: mysqldump -u root -p testdb table1 --where="mycolumn = myvalue" > dump.sql
To copy only the schema but not the data: mysqldump -u [username] -p [database] --no-data > dump.sql
To restore data without deleting previous data (incremental backups): mysqldump -u [username] -p [database] --no-create-info > dump.sql
2) Copy the Database Dump on the Destination Server
Once you have created the dump as per your specification, the next step is to use the dump file to move the MySQL database to the destination server. You will have to use the scp command for that.
scp -P [port] [dump_file].sql [username]@[servername]:[path on destination]
Examples:
scp dump.sql [email protected]:/var/data/mysql
scp -P 3306 dump.sql [email protected]:/var/data/mysql
To copy all databases, use this syntax: scp all_databases.sql [email protected]:~/
For a single database: scp database_name.sql [email protected]:~/
Here's an example:
scp dump.sql [email protected]:/var/data/mysql
scp -P 3306 dump.sql [email protected]:/var/data/mysql
3) Restore the Dump
The last step in the migration is restoring the data on the destination server. The mysql command provides a way to restore the dumped data directly.
mysql -u [username] -p [database] < [dump_file].sql
Example: mysql -u root -p testdb < dump.sql
Don't specify the database in the above command if your dump includes multiple databases.
mysql -u root -p < dump.sql
For all databases: mysql -u [user] -p --all-databases < all_databases.sql
For a single database: mysql -u [user] -p newdatabase < database_name.sql
For multiple databases: mysql -u root -p < dump.sql
Limitations with Dumping and Importing MySQL Data
Dumping and importing MySQL data can present several challenges:
Time Consumption: The process can be time-consuming, particularly for large databases, due to creating, transferring, and importing dump files, which may slow down with network speed and database size.
Potential for Errors: Human error is a significant risk, including overlooking steps, misconfiguring settings, or using incorrect parameters with the mysqldump command.
Data Integrity Issues: Activities on the source database during the dump process can lead to data inconsistencies in the exported SQL dump.
Conclusion
Following the above-mentioned steps, you can migrate a MySQL database between two servers easily, but migrating a MySQL database to another server can be quite a cumbersome activity, especially if it's repetitive. An all-in-one solution like LIKE.TG takes care of this effortlessly and helps manage all your data pipelines in an elegant and fault-tolerant manner. LIKE.TG will automatically catalog all your table schemas and do all the necessary transformations to copy a MySQL database from one server to another. LIKE.TG will fetch the data from your source MySQL server incrementally and restore it seamlessly onto the destination MySQL instance. LIKE.TG will also alert you through email and Slack if there are schema changes or network failures. All of this can be achieved from the LIKE.TG UI, with no need to manage servers or cron jobs. VISIT OUR WEBSITE TO EXPLORE LIKE.TG Want to take LIKE.TG for a spin? Sign Up for a 14-day free trial and experience the feature-rich LIKE.TG suite firsthand. You can also have a look at the unbeatable LIKE.TG pricing that will help you choose the right plan for your business needs. Share your experience of learning about the steps to migrate a MySQL database between 2 servers in the comments section below.
 How to Migrate from MariaDB to MySQL in 2 Easy Methods
MariaDB and MySQL are two widely popular relational databases that boast many of the largest enterprises as their clientele. Both MariaDB and MySQL are available in two versions – A community-driven version and an enterprise version. However, the distribution of features and development processes in the community and enterprise versions of MySQL and MariaDB differ from each other. Even though MariaDB claims itself as a drop-in replacement for MySQL, because of the terms of licensing and enterprising support contracts, many organizations migrate between these two according to their policy changes. This blog post will cover the details of how to move data from MariaDB to MySQL. What is MariaDB? MariaDB is a RDBMS built on SQL, created by the professionals behind the development of MySQL intended to provide technical efficiency and versatility. You can use this database for many use cases, which include data warehousing, and managing your data. Its relational nature will be helpful for you. And, the open-source community will provide you with the resources required. What is MySQL? MySQL is one of the renowned open source relational database management systems. You can store and arrange data in structured formats in tables with columns and rows. You can define, query, manage, and manipulate your data using SQL. You can use MySQL to develop websites, and applications. Examples of companies who used this are Uber, Airbnb, Pinterest, and Shopify. They use MySQL for their database management requirements because of its versatility and capabilities to in manage large operations. Methods to Integrate MariaDB with MySQL Method 1: Using LIKE.TG Data to Connect MariaDB to MySQL A fully managed, No-Code Data Pipeline platform like LIKE.TG Data allows you to seamlessly migrate your data from MariaDB to MySQL in just two easy steps. No specialized technical expertise is required to perform the migration. Method 2: Using Custom Code to Connect MariaDB to MySQL Use mysqldump to migrate your data from MariaDB to MySQL by writing a couple of commands mentioned in the blog. However this is a costly operation that can also overload the primary database. Method 3: Using MySQL Workbench You can also migrate your data from MariaDB to MySQL using the MySQL Migration Wizard. However, it has limitations on the size of migrations that it can handle effectively, and as a result, it cannot handle very large datasets. Get Started with LIKE.TG for Free Method 1: Using LIKE.TG Data to Connect MariaDB to MySQL The steps involved are, Step 1: Configure MariaDB as Source Step 2: Configure MySQL as Destination Check out why LIKE.TG is the Best: Schema Management: LIKE.TG takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema. Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.’ Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer. Secure: LIKE.TG has a fault-tolerant architecture that ensures that the data is handled in a secure, consistent manner with zero data loss. Get Started with LIKE.TG for Free Method 2: Using Custom Code to Connect MariaDB to MySQL Since both databases provide the same underlying tools, it is very easy to copy data from MariaDB to MySQL. The following steps detail how to accomplish this. 
Step 1: From the client machine, use the below command to create a complete dump of the database in MariaDB.
mysqldump -u username -p database_name > source_dump.sql
This command creates a source_dump.sql file.
Step 2: Move the file to a machine that can access the target MySQL database. If the same machine has access to the target database, this step is not relevant.
Step 3: Log in to the target MySQL database as root.
mysql -u root -p
Step 4: In the MySQL shell, execute the below command to create a database.
CREATE DATABASE target_database;
Where target_database is the name of the database to which data is to be imported.
Step 5: Exit the MySQL shell and go to the location where the source_dump.sql is stored.
Step 6: Execute the below command to load the database from the dump file.
mysql -u username -p target_database < source_dump.sql
That concludes the process. The target database is now ready for use, and this can be verified by logging in to the MySQL shell and executing a SHOW TABLES command. Even though this approach provides a simple way to perform a one-off copy operation between the two databases, it has a number of limitations. Let's have a look at the limitations of this approach.
MariaDB to MySQL: Limitations of Custom Code Approach
In most cases, the original database will be online while the customer attempts to copy the data. The mysqldump command is a costly operation and can leave the primary database unavailable or slow during the process.
While the mysqldump command is being executed, new data could come in, resulting in leftover data that needs to be handled separately.
This approach works fine if the copying operation is a one-off process. In some cases, organizations may want to maintain an exact running replica of MariaDB in MySQL and then migrate. This will need a complex script that can use the binary logs to create a replica.
Even though MariaDB claims to be a drop-in replacement, development has been diverging, and there are many incompatibilities between versions as described here. This may lead to problems while migrating using the above approach.
Method 3: Using MySQL Workbench
In MySQL Workbench, navigate to Database > Migrate to initiate the Migration Wizard.
On the Overview page, select Open ODBC Manager. This is done to make sure the ODBC driver for MySQL Server is installed. If it is not, use the MySQL Installer (the same installer used to install MySQL Workbench) to install it.
Select Start Migration.
Specify the source database details -> test the connection -> select Next.
Configure the target database details and verify the connection.
Let the wizard extract the schema list from the source server -> select the schema to migrate.
The migration will begin once you select the objects you want to migrate on the Source Objects page.
Review the generated SQL for each object -> fix migration issues, or change the name of the target object and columns using the View drop-down of Manual Edit.
Go to the next page -> choose to create the schema in the target RDBMS -> give it some time to finish the creation, and check the created objects on the Create Target Results page.
On the Data Transfer Settings page, configure the data migration -> select Next to move your data.
Check the migration report after the process -> select Finish to close the wizard.
You can check the consistency of the source data and schema by logging into the target database; a quick scripted row-count comparison is sketched below.
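As a rough, scriptable sanity check after either method, per-table row counts can be compared between the MariaDB source and the MySQL target. The loop below is only a sketch: host names, credentials, and the database name are placeholders, it assumes both servers are reachable from the machine running it, and it uses exact SELECT COUNT(*) queries rather than the estimated counts stored in information_schema.
# Compare row counts for every table in the migrated database.
DB=database_name
for tbl in $(mysql -h source_host -u username -p'source_password' -N -e "SHOW TABLES FROM $DB"); do
  src=$(mysql -h source_host -u username -p'source_password' -N -e "SELECT COUNT(*) FROM $DB.$tbl")
  dst=$(mysql -h target_host -u username -p'target_password' -N -e "SELECT COUNT(*) FROM $DB.$tbl")
  [ "$src" = "$dst" ] && status="OK" || status="MISMATCH"
  echo "$tbl: source=$src target=$dst $status"
done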
Also, check if the table and row counts match. SELECT COUNT (*) FROM table_name; Get MySQL row count of tables in your database. SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema = 'classicmodels' ORDER BY table_name; 14. Check the database size. SELECT TABLE_SCHEMA AS `Database`, TABLE_NAME AS `Table`, ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024) AS `Size (MB)` FROM information_schema.TABLES GROUP BY table_schema; Understand the size of the table. SELECT table_name AS "Table", ROUND(((data_length + index_length) / 1024 / 1024), 2) AS "Size (MB)" FROM information_schema.TABLES WHERE table_schema = "database_name" ORDER BY (data_length + index_length) DESC; Limitations of using MySQL Workbench to Migrate MariaDB to MySQL: Size Constraints: MySQL workbench has limitations on the size of migrations that it can handle effectively. It cannot be used for very large databases. Limited Functionality: It cannot deal with complex data structures efficiently. It requires manual interventions or additional tools to do so when using MySQL workbench. Use Cases of MariaDB to MySQL Migration MySQL is suitable for heavily trafficked websites and mission-critical applications. MySQL can handle terabyte-sized databases and also supports high-availability database clustering. When you migrate MariaDB to MySQL, you can manage databases of websites and applications with high traffic. Popular applications that use the MySQL database include TYPO3, MODx, Joomla, WordPress, phpBB, MyBB, and Drupal. MySQL is one of the most popular transactional engines for eCommerce platforms. Thus, when you convert MariaDB to MySQL, it becomes easy to use to manage customer data, transactions, and product catalogs. When you import MariaDB to MySQL, it assists you in fraud detection. MySQL helps to analyze transactions, claims etc. in real-time, along with trends or anomalous behavior to prevent fraudulent activities. Learn More About: How to Migrate MS SQL to MySQL in 3 Methods Migrate Postgres to MySQL Connecting FTP to MySQL Conclusion This blog explained two methods that you can use to import MariaDB to MySQL. The manual custom coding method provides a simple approach for a one-off migration between MariaDB and MySQL. Among the methods provided, determining which method is to be used depends on your use case. You can go for an automated data pipeline platform if you want continuous or periodic copying operations. Sign Up for a 14-day free trial FAQ on MariaDB to MySQL How do I switch from MariaDB to MySQL? You can transfer your data from MariaDB to MySQL using custom code or automated pipeline platforms like LIKE.TG Data. How to connect MariaDB to MySQL? You can do this by using custom codes. The steps include:1. Create a Dump of MariaDB 2. Log in to MySQL as a Root User3. Create a MySQL Database4. Restore the Data5. Verify and Test How to upgrade MariaDB to MySQL? Upgrading from MariaDB to MySQL would involve fully backing the MariaDB databases. Afterward, uninstall MariaDB, install MySQL, and restore from the created backup. Be sure that the MySQL version supports all features used in your setup. Is MariaDB compatible with MySQL? MariaDB’s data files are generally binary compatible with those from the equivalent MySQL version.
 How To Move Your Data From MySQL to Redshift: 2 Easy Methods
Is your MySQL server getting too slow for analytical queries now? Or are you looking to join data from another database while running queries? Whatever your use case, it is a great decision to move the data from MySQL to Redshift for analytics. This post covers the detailed steps you need to follow to migrate data from MySQL to Redshift. You will also get a brief overview of MySQL and Amazon Redshift, and explore the challenges involved in connecting MySQL to Redshift using custom ETL scripts. Let's get started.
Methods to Set up MySQL to Redshift
Method 1: Using LIKE.TG to Set up MySQL to Redshift Integration
Method 2: Incremental Load for MySQL to Redshift Integration
Method 3: Change Data Capture With Binlog
Method 4: Using Custom ETL Scripts
Method 1: Using LIKE.TG to Set up MySQL to Redshift Integration
LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. The following steps can be implemented to set up MySQL to Redshift Migration using LIKE.TG :
Configure Source: Connect LIKE.TG Data with MySQL by providing a unique name for your Pipeline along with information about your MySQL database such as its name, IP Address, Port Number, Username, Password, etc.
Integrate Data: Complete MySQL to Redshift Migration by providing your Redshift credentials such as your authorized Username and Password, along with information about your Host IP Address and Port Number value. You will also need to provide a name for your database and a unique name for this destination.
Advantages of Using LIKE.TG
There are a couple of reasons why you should opt for LIKE.TG over building your own solution to migrate data from MySQL to Redshift.
Automatic Schema Detection and Mapping: LIKE.TG scans the schema of incoming MySQL data automatically. In case of any change, LIKE.TG seamlessly incorporates the change in Redshift.
Ability to Transform Data: LIKE.TG allows you to transform and enrich data both before and after moving it to the Data Warehouse. This ensures that you always have analysis-ready data in your Redshift Data Warehouse.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Method 2: Incremental Load for MySQL to Redshift Integration
You can follow the below-mentioned steps to connect MySQL to Redshift.
Step 1: Dump the data into files
Step 2: Clean and Transform
Step 3: Upload to S3 and Import into Redshift
Step 1. Dump the Data into Files
The most efficient way of loading data into Amazon Redshift is through the COPY command, which loads CSV/JSON files into Amazon Redshift. So, the first step is to bring the data in your MySQL database into CSV/JSON files. There are essentially two ways of achieving this:
1) Using the mysqldump command.
mysqldump -h mysql_host -u user database_name table_name --result-file table_name_data.sql
The above command will dump data from the table table_name to the file table_name_data.sql. But the file will not be in the CSV/JSON format required for loading into Amazon Redshift. This is how a typical row may look in the output file:
INSERT INTO `users` (`id`, `first_name`, `last_name`, `gender`) VALUES (3562, 'Kelly', 'Johnson', 'F'),(3563, 'Tommy', 'King', 'M');
The above rows will need to be converted to the following format:
"3562","Kelly","Johnson","F"
"3563","Tommy","King","M"
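A shortcut worth knowing about (not covered in the original steps) is mysqldump's --tab option, which writes one delimited data file per table and so skips the INSERT-to-CSV conversion entirely. The sketch below assumes the dump is run on the database host itself, or at least that the directory passed to --tab is writable by the MySQL server and allowed by its secure_file_priv setting; the path and names are placeholders.
# Writes table_name.sql (schema) plus table_name.txt (data) into the target directory,
# with comma-separated, double-quoted fields that Redshift's COPY can read as CSV.
mysqldump -h mysql_host -u user -p database_name table_name \
  --tab=/var/lib/mysql-files \
  --fields-terminated-by=',' \
  --fields-enclosed-by='"'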
2) Query the data into a file.
mysql -B -u user database_name -h mysql_host -e "SELECT * FROM table_name;" | sed "s/'/\'/;s/\t/\",\"/g;s/^/\"/;s/$/\"/;s/\n//g" > table_name_data.csv
You will have to do this for all tables:
for tb in $(mysql -u user -ppassword database_name -sN -e "SHOW TABLES;"); do echo .....; done
Step 2. Clean and Transform
There might be several transformations required before you load this data into Amazon Redshift. For example, '0000-00-00' is a valid DATE value in MySQL, but it is not in Redshift; Redshift accepts '0001-01-01' though. Apart from this, you may want to clean up some data according to your business logic, make time zone adjustments, concatenate two fields, or split a field into two. All these operations will have to be done over files and will be error-prone.
Step 3. Upload to S3 and Import into Amazon Redshift
Once you have the files to be imported ready, you will upload them to an S3 bucket. Then run the COPY command:
COPY table_name FROM 's3://my_redshift_bucket/some-path/table_name/' credentials 'aws_access_key_id=my_access_key;aws_secret_access_key=my_secret_key';
Again, the above operation has to be done for every table. Once the COPY has been run, you can check the stl_load_errors table for any copy failures.
After completing the aforementioned steps, you can migrate MySQL to Redshift successfully. In a happy scenario, the above steps should just work fine. However, in real-life scenarios, you may encounter errors in each of these steps, e.g.:
Network failures or timeouts while dumping MySQL data into files.
Errors encountered while transforming data due to an unexpected entry or a newly added column.
Network failures during the S3 upload.
Timeout or data compatibility issues during the Redshift COPY. COPY might fail for various reasons, and many failures will have to be manually investigated and retried.
Challenges of Connecting MySQL to Redshift using Custom ETL Scripts
The custom ETL method to connect MySQL to Redshift is effective. However, there are certain challenges associated with it. Below are some of the challenges that you might face while connecting MySQL to Redshift:
In cases where data needs to be moved once or in batches only, the custom script method works. This approach fails if you have to move data from MySQL to Redshift in real-time.
Incremental load (change data capture) becomes tedious as there will be additional steps that you need to follow to achieve the connection.
Often, when you write code to extract a subset of data, those scripts break as the source schema keeps changing or evolving. This can result in data loss.
The process mentioned above is brittle, error-prone, and often frustrating. These challenges impact the consistency and accuracy of the data available in your Amazon Redshift in near real-time. These were the common challenges that most users face while connecting MySQL to Redshift.
Method 3: Change Data Capture With Binlog
The process of applying changes made to data in MySQL to the destination Redshift table is called Change Data Capture (CDC). You need to use the binary log (binlog) in order to apply the CDC technique to a MySQL database; a quick way to confirm your binlog configuration is sketched below.
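Before building a CDC pipeline on the binlog, it is worth confirming that binary logging is enabled and set to row-based format, since row images are what most CDC tooling expects. The check below is a minimal sketch with placeholder connection details; the SET PERSIST step assumes MySQL 8.0 or later (on older versions, binlog settings go into my.cnf and require a restart).
# Confirm the binary log is on and check its format.
mysql -h mysql_host -u user -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"
# For row-based CDC, switch the format to ROW (MySQL 8.0+ can persist this at runtime).
mysql -h mysql_host -u user -p -e "SET PERSIST binlog_format = 'ROW';"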
Replication may occur almost instantly when change data is captured as a stream using Binlog. Binlog records table structure modifications like ADD/DROP COLUMN in addition to data changes like INSERT, UPDATE, and DELETE. Additionally, it guarantees that Redshift also deletes records that are removed from MySQL.
Getting Started with Binlog
When you use CDC with Binlog, you are actually writing an application that reads, transforms, and imports streaming data from MySQL to Redshift. You may accomplish this by using an open-source module called mysql-replication-listener. This C++ library provides a streaming API for reading data from the MySQL Binlog in real time. A high-level API is also offered for a few languages, such as python-mysql-replication (Python) and kodama (Ruby).
Drawbacks of Using Binlog
Building your own CDC application requires serious development effort. Apart from the above-mentioned data streaming flow, you will need to build:
Transaction management: Track where your program is in the Binlog stream so that, if an error causes it to terminate while reading Binlog data, it can resume from where it left off.
Data buffering and retry: Redshift may become unavailable while your application is sending data. Your application must buffer unsent data until the Redshift cluster is back up; doing this incorrectly may result in duplicate or lost data.
Table schema change support: A table schema modification arrives as an ALTER/ADD/DROP TABLE Binlog event, which is a native MySQL SQL statement that cannot be executed as-is on Redshift. You will need to convert such MySQL statements into the appropriate Amazon Redshift statements in order to support table schema updates.
Method 4: Using Custom ETL Scripts
Step 1: Configuring a Redshift cluster on Amazon
Make sure that a Redshift cluster has been created, and write down the database name, login, password, and cluster endpoint.
Step 2: Creating a custom ETL script
Select a familiar and comfortable programming language (Python, Java, etc.). Install any required libraries or packages so that your language can communicate with Redshift and MySQL Server.
Step 3: MySQL data extraction
Connect to the MySQL database. Write a SQL query to extract the data you need. You can use this query in your script to pull the data.
Step 4: Data transformation
You can perform various data transformations using Python's data manipulation libraries like `pandas`.
Step 5: Redshift data loading
With the recorded connection information, establish a connection to Redshift. Run the required statements to load the data. This might entail creating schemas, creating tables, and inserting data into them.
Step 6: Error handling, scheduling, testing, deployment, and monitoring
Use try-catch blocks to handle errors, and record messages to a file or logging service. To execute your script at predetermined intervals, use a scheduling application such as Task Scheduler (Windows) or `cron` (Unix-based systems); a sample cron entry is sketched below. Make sure your script handles every circumstance appropriately by thoroughly testing it with a variety of scenarios. Deploy your script on the relevant environment or server. Set up monitoring for your ETL process; this may include alerts for both successful and unsuccessful runs. Review your script frequently and make any necessary updates. Don't forget to replace placeholders (such as hostnames, database names, and credentials) with your real values.
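To make the scheduling step concrete, a single crontab entry is usually enough. The line below is only an illustration: the schedule, interpreter, script path, and log path are hypothetical values, not part of the method above.
# Run the ETL script every night at 02:30 and append both output and errors to a log file.
30 2 * * * /usr/bin/python3 /opt/etl/mysql_to_redshift.py >> /var/log/mysql_to_redshift.log 2>&1
Redirecting stderr into the same log (2>&1) keeps failed runs visible in one place, which ties in with the monitoring and alerting points that follow.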
In addition, think about enhancing the logging, error handling, and optimizations in accordance with your unique needs. Disadvantages of using ETL scripts for MySQL Redshift Integration Lack of GUI: The flow could be harder to understand and debug. Dependencies and environments: Without modification, custom scripts might not run correctly on every operating system. Timelines: Creating a custom script could take longer than constructing ETL processes using a visual tool. Complexity and maintenance: Writing bespoke scripts takes more effort in creation, testing, and maintenance. Restricted Scalability: Performance issues might arise from their inability to handle complex transformations or enormous volumes of data. Security issues: Managing sensitive data and login credentials in scripts needs close oversight to guarantee security. Error Handling and Recovery: It might be difficult to develop efficient mistake management and recovery procedures. In order to ensure the reliability of the ETL process, it is essential to handle various errors. Why Replicate Data From MySQL to Redshift? There are several reasons why you should replicate MySQL data to the Redshift data warehouse. Maintain application performance. Analytical queries can have a negative influence on the performance of your production MySQL database, as we have already discussed. It could even crash as a result of it. Analytical inquiries need specialized computer power and are quite resource-intensive. Analyze ALL of your data. MySQL is intended for transactional data, such as financial and customer information, as it is an OLTP (Online Transaction Processing) database. But, you should use all of your data, even the non-transactional kind, to get insights. Redshift allows you to collect and examine all of your data in one location. Faster analytics. Because Redshift is a data warehouse with massively parallel processing (MPP), it can process enormous amounts of data much faster. However, MySQL finds it difficult to grow to meet the processing demands of complex, contemporary analytical queries. Not even a MySQL replica database will be able to match Redshift’s performance. Scalability. Instead of the distributed cloud infrastructure of today, MySQL was intended to operate on a single-node instance. Therefore, time- and resource-intensive strategies like master-node setup or sharding are needed to scale beyond a single node. The database becomes even slower as a result of all of this. Above mentioned are some of the use cases of MySQL to Redshift replication. Before we wrap up, let’s cover some basics. Why Do We Need to Move Data from MySQL to Redshift? Every business needs to analyze its data to get deeper insights and make smarter business decisions. However, performing Data Analytics on huge volumes of historical data and real-time data is not achievable using traditional Databases such as MySQL. MySQL can’t provide high computation power that is a necessary requirement for quick Data Analysis. Companies need Analytical Data Warehouses to boost their productivity and run processes for every piece of data at a faster and efficient rate. Amazon Redshift is a fully managed Could Data Warehouse that can provide vast computing power to maintain performance and quick retrieval of data and results. Moving data from MySQL to Redshift allow companies to run Data Analytics operations efficiently. Redshift columnar storage increases the query processing speed. 
Conclusion This article provided you with a detailed approach using which you can successfully connect MySQL to Redshift. You also got to know about the limitations of connecting MySQL to Redshift using the custom ETL method. Big organizations can employ this method to replicate the data and get better insights by visualizing the data. Thus, connecting MySQL to Redshift can significantly help organizations to make effective decisions and stay ahead of their competitors.
 How to Replicate Postgres to Snowflake: 4 Easy Steps
Snowflake's architecture was designed from scratch; it is not an extension of an existing Big Data framework like Hadoop. It is a hybrid of the traditional shared-disk and modern shared-nothing database architectures. Snowflake uses a central repository for persisted data that is accessible from all compute nodes in the data warehouse, and processes queries using MPP (Massively Parallel Processing) compute clusters where each node in the cluster stores a portion of the data set. Snowflake processes queries using "Virtual Warehouses", each of which is an MPP compute cluster composed of multiple compute nodes. All components of Snowflake's service run in the public cloud, much like AWS Redshift. This Data Warehouse is considered a cost-effective, high-performing analytical solution and is used by many organizations for critical workloads. In this post, we will discuss how to move real-time data from Postgres to Snowflake. So, read along and understand the steps to migrate data from Postgres to Snowflake.
Method 1: Use LIKE.TG ETL to Move Data From Postgres to Snowflake With Ease
Using LIKE.TG , an official Snowflake ETL partner, you can easily load data from Postgres to Snowflake with just 3 simple steps: Select your Source, Provide Credentials, and Load to Destination. LIKE.TG is the only real-time ELT No-code Data Pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. SIGN UP HERE FOR A 14-DAY FREE TRIAL
Step 1: Connect your PostgreSQL account to LIKE.TG 's platform. LIKE.TG has an in-built PostgreSQL Integration that connects to your account within minutes. Read the documents to know the detailed configuration steps for each PostgreSQL variant.
Step 2: Configure Snowflake as a Destination
Perform the following steps to configure Snowflake as a Destination in LIKE.TG : By completing the above steps, you have successfully completed Postgres Snowflake integration. To know more, check out: PostgreSQL Source Connector Snowflake Destination Connector
Check out some of the cool features of LIKE.TG :
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real-time. This ensures efficient utilization of bandwidth on both ends.
Schema Management: LIKE.TG takes away the tedious task of schema management & automatically detects the schema of incoming data and maps it to the destination schema.
Scalable Infrastructure: LIKE.TG has in-built integrations for 150+ sources that can help you scale your data infrastructure as required.
Method 2: Write Custom Code to Move Data from Postgres to Snowflake
The four steps to replicate Postgres to Snowflake using custom code (Method 2) are as follows:
1. Extract Data from Postgres
The COPY TO command is the most popular and efficient method to extract data from a Postgres table to a file. We can also use the pg_dump utility for the initial full data extraction. We will have a look at both methods.
a. Extract Data Using the COPY Command
As mentioned above, COPY TO is the command used to move data between Postgres tables and standard file-system files. It copies an entire table or the results of a SELECT query to a file:
COPY table or sql_query TO out_file_name WITH options.
Example:
COPY employees TO 'C:\tmp\employees_db.csv' WITH DELIMITER ',' CSV HEADER;
COPY (SELECT * FROM contacts WHERE age < 45) TO 'C:\tmp\young_contacts_db.csv' WITH DELIMITER ',' CSV HEADER;
Some frequently used options are:
FORMAT: The format of the data to be written: text, CSV, or binary (default is text).
ESCAPE: The character that should appear before a data character that matches the QUOTE value.
NULL: Represents the string that is a null value. The default is \N (backslash-N) in text format and an unquoted empty string in CSV format.
ENCODING: Encoding of the output file. The default value is the current client encoding.
HEADER: If it is set, the first line of the output file contains the column names from the table.
QUOTE: The quoting character to be used when data is quoted. The default is the double quote (").
DELIMITER: The character that separates columns within each line of the file.
Next, we can have a look at how the COPY command can be used to extract data from multiple tables using a PL/pgSQL procedure. Here, the table named tables_to_extract contains details of the tables to be exported.
CREATE OR REPLACE FUNCTION table_to_csv(path TEXT) RETURNS void AS $$
declare
   tables RECORD;
   statement TEXT;
begin
FOR tables IN
   SELECT (schema || '.' || table_name) AS table_with_schema
   FROM tables_to_extract
LOOP
   statement := 'COPY ' || tables.table_with_schema || ' TO ''' || path || '/' || tables.table_with_schema || '.csv' ||''' DELIMITER '';'' CSV HEADER';
   EXECUTE statement;
END LOOP;
return;
end;
$$ LANGUAGE plpgsql;
SELECT table_to_csv('/home/user/dir/dump'); -- This will create one csv file per table, in /home/user/dir/dump/
Sometimes you want to extract data incrementally. To do that, add more metadata, like the timestamp of the last data extraction, to the table tables_to_extract and use that information while creating the COPY command to extract data changed after that timestamp.
Consider that you are using a column named last_pull_time corresponding to each table in the table tables_to_extract, which stores the last successful data pull time. Each time, only data modified after that timestamp has to be pulled. The body of the loop in the procedure will change like this: here a dynamic SQL is created with a predicate comparing last_modified_time_stamp from the table to be extracted against last_pull_time from tables_to_extract.
begin
FOR tables IN
   SELECT (schema || '.' || table_name) AS table_with_schema, last_pull_time AS lt
   FROM tables_to_extract
LOOP
   statement := 'COPY (SELECT * FROM ' || tables.table_with_schema || ' WHERE last_modified_time_stamp > ''' || tables.lt || ''') TO ''' || path || '/' || tables.table_with_schema || '.csv' ||''' DELIMITER '';'' CSV HEADER';
   EXECUTE statement;
END LOOP;
return;
end;
b. Extract Data Using pg_dump
As mentioned above, pg_dump is the utility for backing up a Postgres database or tables. It can also be used to extract data from the tables.
Example syntax:
pg_dump --column-inserts --data-only --table=<table> <database> > table_name.sql
Here, the output file table_name.sql will be in the form of INSERT statements like
INSERT INTO my_table (column1, column2, column3, ...) VALUES (value1, value2, value3, ...);
This output has to be converted into a CSV file with the help of a small script in your favorite language, like Bash or Python; a simpler psql-based alternative is sketched below.
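Rather than post-processing INSERT statements, an often simpler route (my suggestion, not part of the original write-up) is psql's client-side \copy meta-command, which writes CSV directly from any table or query and, unlike server-side COPY TO, needs no file access on the database host. A minimal sketch with placeholder connection details and an arbitrary cutoff timestamp:
# Full table export and an incremental export, both written as CSV on the client machine.
psql -h pg_host -U pg_user -d database_name \
  -c "\copy my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER)"
psql -h pg_host -U pg_user -d database_name \
  -c "\copy (SELECT * FROM my_table WHERE last_modified_time_stamp > '2024-01-01') TO 'my_table_delta.csv' WITH (FORMAT csv, HEADER)"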
2. Data Type Conversion from Postgres to Snowflake
There will be domain-specific logic to be applied while transferring data. Apart from that, the following things are to be noted while migrating data to avoid surprises.
Snowflake out-of-the-box supports a number of character sets, including UTF-8. Check out the full list of encodings.
Unlike many other cloud analytical solutions, Snowflake supports SQL constraints like UNIQUE, PRIMARY KEY, FOREIGN KEY, and NOT NULL.
Snowflake by default has a rich set of data types. Below is the list of Snowflake data types and corresponding PostgreSQL types.
Snowflake allows almost all of the date/time formats. The format can be explicitly specified while loading data into the table using the File Format Option, which we will discuss in detail later. The complete list of supported date/time formats can be found in the Snowflake documentation.
3. Stage Data Files
Before inserting data from Postgres into a Snowflake table, it needs to be uploaded to a temporary location, which is called staging. There are two types of stages – internal and external.
a. Internal Stage
Each user and table is automatically allocated an internal stage for data files. It is also possible to create named internal stages.
The user stage is named and accessed as '@~'.
The name of the table stage will be the same as that of the table.
The user or table stages can't be altered or dropped.
The user or table stages do not support setting file format options.
As mentioned above, internal named stages can be created by the user using the respective SQL statements. They provide a lot of flexibility while loading data by letting you assign file format and other options to named stages. SnowSQL is quite a handy CLI client that can be used to run DDL and data loading commands, and it is available for Linux, Mac, and Windows. Read more about the tool and options.
Below are some example commands to create a stage:
Create a named stage:
create or replace stage my_postgres_stage copy_options = (on_error='skip_file') file_format = (type = 'CSV' field_delimiter = '|' skip_header = 1);
The PUT command is used to stage data files to an internal stage. The syntax of the command is as given below:
PUT file://path_to_file/filename @internal_stage_name
Example: Upload a file named cnt_data.csv in the /tmp/postgres_data/data/ directory to an internal stage named postgres_stage.
put file:///tmp/postgres_data/data/cnt_data.csv @postgres_stage;
There are many useful options that can help improve performance, such as setting the parallelism of the upload and automatic compression of data files. More information about those options is listed here.
b. External Stage
Amazon S3 and Microsoft Azure are the external staging locations currently supported by Snowflake. We can create an external stage with either of those locations and load data to a Snowflake table. To create an external stage on S3, IAM credentials have to be given. If the data is encrypted, then encryption keys should also be given.
create or replace stage postgre_ext_stage url='s3://snowflake/data/load/files/' credentials=(aws_key_id='111a233b3c' aws_secret_key='abcd4kx5y6z') encryption=(master_key = 'eSxX0jzYfIjkahsdkjamtnBKONDwOaO8=');
Data to the external stage can be uploaded using AWS or Azure web interfaces. For S3, you can upload using the AWS web console, any AWS SDK, or third-party tools; a minimal AWS CLI example is sketched below.
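For scripted uploads to the external stage location, the AWS CLI is usually the simplest option. The command below is a minimal sketch: the local directory is a placeholder, the bucket path matches the stage definition above, and it assumes AWS credentials are already configured (for example via aws configure).
# Upload every CSV produced in the extraction step to the S3 path backing the external stage.
aws s3 cp /tmp/postgres_data/data/ s3://snowflake/data/load/files/ \
  --recursive --exclude "*" --include "*.csv"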
4. Copy Staged Files from Postgres to Snowflake Table
COPY INTO is the command used to load the contents of the staged file(s) into a Snowflake table. To execute the command, compute resources in the form of virtual warehouses are needed. You can read more about this command in the Snowflake ETL best practices.
Example: To load from a named internal stage
COPY INTO postgres_table FROM @postgres_stage;
Loading from the external stage (only one file is specified):
COPY INTO my_external_stage_table FROM @postgres_ext_stage/tutorials/dataloading/contacts_ext.csv;
You can even copy directly from an external location:
COPY INTO postgres_table FROM s3://mybucket/snow/data/files credentials = (aws_key_id='$AWS_ACCESS_KEY_ID' aws_secret_key='$AWS_SECRET_ACCESS_KEY') encryption = (master_key = 'eSxX0jzYfIdsdsdsamtnBKOSgPH5r4BDDwOaO8=') file_format = (format_name = csv_format);
Files can be specified using patterns:
COPY INTO pattern_table FROM @postgre_stage file_format = (type = 'CSV') pattern='.*/.*/.*[.]csv[.]gz';
Some common format options for CSV format supported in the COPY command:
COMPRESSION: Compression algorithm used for the input data files.
RECORD_DELIMITER: Record or line separator character in an input CSV file.
FIELD_DELIMITER: Character separating fields in the input file.
SKIP_HEADER: How many header lines are to be skipped.
DATE_FORMAT: To specify the date format.
TIME_FORMAT: To specify the time format.
Check out the full list of options. So, now you have finally loaded data from Postgres to Snowflake.
Update Snowflake Table
We have discussed how to extract data incrementally from PostgreSQL. Now we will look at how to apply that incrementally extracted (delta) data to the target Snowflake table.
As we discussed in the introduction, Snowflake is not based on any big data framework and does not have the limitations on row-level updates found in systems like Hive. It supports row-level updates, making delta data migration much easier. The basic idea is to load the incrementally extracted data into an intermediate table and modify records in the final table as per the data in the intermediate table.
There are three popular methods to modify the final table once data is loaded into the intermediate table.
Update the rows in the final table and insert new rows from the intermediate table which are not in the final table.
UPDATE final_target_table t SET t.value = s.value FROM intermed_delta_table s WHERE t.id = s.id;
INSERT INTO final_target_table (id, value) SELECT id, value FROM intermed_delta_table WHERE id NOT IN (SELECT id FROM final_target_table);
Delete all records from the final table which are also in the intermediate table. Then insert all rows from the intermediate table into the final table.
DELETE FROM final_target_table WHERE id IN (SELECT id FROM intermed_delta_table);
INSERT INTO final_target_table (id, value) SELECT id, value FROM intermed_delta_table;
MERGE statement: Inserts and updates can be done with a single MERGE statement, and it can be used to apply changes in the intermediate table to the final table with one SQL statement.
MERGE INTO final_target_table t1 USING intermed_delta_table t2 ON t1.id = t2.id
WHEN MATCHED THEN UPDATE SET value = t2.value
WHEN NOT MATCHED THEN INSERT (id, value) VALUES (t2.id, t2.value);
Limitations of Using Custom Scripts for Postgres to Snowflake Connection
Here are some of the limitations associated with the use of custom scripts to connect Postgres to Snowflake.
Complexity
This method necessitates a solid grasp of PostgreSQL and Snowflake, including their respective data types, SQL syntax, and file-handling features. Some may find this a challenging learning curve, as not everyone has substantial familiarity with SQL or database management.
Time-consuming It can take a while to write scripts and troubleshoot any problems that may occur, particularly with larger databases or more intricate data structures. Error-prone In human scripting, mistakes can happen. A small error in the script might result in inaccurate or corrupted data. No Direct Support You cannot contact a specialized support team in the event that you run into issues. For help with any problems, you’ll have to rely on the manuals, community forums, or internal knowledge. Scalability Issues The scripts may need to be modified or optimized to handle larger datasets as the volume of data increases. Without substantial efforts, this strategy might not scale effectively. Inefficiency with Large Datasets It might not be the most effective method to move big datasets by exporting them to a file and then importing them again, especially if network bandwidth is limited. Methods of direct data transmission could be quicker. Postgres to Snowflake Data Replication Use Cases Let’s look into some use cases of Postgres-Snowflake replication. Transferring Postgres data to Snowflake Are you feeling constrained by your Postgres configuration on-premises? Transfer your data to Snowflake’s endlessly scalable cloud platform with ease. Take advantage of easy performance enhancements, cost-effectiveness, and the capacity to manage large datasets. Data Warehousing Integrate data into Snowflake’s robust data warehouse from a variety of sources, including Postgres. This can help uncover hidden patterns, get a better understanding of your company, and strengthen strategic decision-making. Advanced Analytics Utilize Snowflake’s quick processing to run complex queries and find minute patterns in your Postgres data. This can help you stay ahead of the curve, produce smart reports, and gain deeper insights. Artificial Intelligence and Machine Learning Integrate your Postgres data seamlessly with Snowflake’s machine-learning environment. As a result, you can develop robust models, provide forecasts, and streamline processes to lead your company toward data-driven innovation. Collaboration and Data Sharing Colleagues and partners can securely access your Postgres data within the collaborative Snowflake environment. Hence, this integration helps promote smooth communication and expedite decision-making and group achievement. Backup and Disaster Recovery Transfer your Postgres data to the dependable and safe cloud environment offered by Snowflake. You can be assured that your data is constantly accessible and backed up, guaranteeing company continuity even in the event of unanticipated events. Before wrapping up, let’s cover some basics. What is Postgres? Postgres is an open-source Relational Database Management System (RDBMS) developed at the University of California, Berkeley. It is widely known for reliability, feature robustness, and performance, and has been in use for over 20 years. Postgres supports not only object-relational data but also supports complex structures and a wide variety of user-defined data types. This gives Postgres a definitive edge over other open-source SQL databases like MySQL, MariaDB, and Firebird. Businesses rely on Postgres as their primary data storage/data warehouse for online, mobile, geospatial, and analytics applications. Postgres runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows. What is Snowflake? 
Snowflake is a fully-managed Cloud-based Data Warehouse that helps businesses modernize their analytics strategy. Snowflake can query both structured and unstructured data using standard SQL. It delivers results of user queries spanning Gigabytes and Petabytes of data in seconds. Snowflake automatically harnesses thousands of CPU cores to quickly execute queries for you. You can even query streaming data from your web, mobile apps, or IoT devices in real-time. Snowflake comes with a web-based UI, a command-line tool, and APIs with client libraries that makes interacting with Snowflake pretty simple. Snowflake is secure and meets the most secure regulatory standards such as HIPAA, FedRAMP, and PCI DSS. When you store your data in Snowflake, your data is encrypted in transit and at rest by default and it’s automatically replicated, restored, and backed up to ensure business continuity. Additional Resources for PostgreSQL Integrations and Migrations PostgreSQL to Oracle Migration Connect PostgreSQL to MongoDB Connect PostgreSQL to Redshift Integrate Postgresql to Databricks Export a PostgreSQL Table to a CSV File Conclusion High-performing Data Warehouse solutions like Snowflake are getting more adoption and are becoming an integral part of a modern analytics pipeline. Migrating data from various data sources to this kind of cloud-native solution i.e from Postgres to Snowflake, requires expertise in the cloud, data security, and many other things like metadata management. If you are looking for an ETL tool that facilitates the automatic migration and transformation of data from Postgres to Snowflake, then LIKE.TG is the right choice for you. LIKE.TG is a No-code Data Pipeline. It supports pre-built integration from 150+ data sources at a reasonable price. With LIKE.TG , you can perfect, modify and enrich your data conveniently. visit our website to explore LIKE.TG [/LIKE.TG Button] SIGN UP for a 14-day free trial and see the difference! Have any further queries about PostgreSQL to Snowflake? Get in touch with us in the comments section below.
 How To Set up SQL Server to Snowflake in 4 Easy Methods
Snowflake is great if you have big data needs. It offers scalable computing and limitless size in a traditional SQL and Data Warehouse setting. If you have a relatively small dataset or low concurrency/load then you won’t see the benefits of Snowflake.Simply put, Snowflake has a friendly UI, and unlimited storage capacity, along with the control, security, and performance you’d expect for a Data Warehouse, something SQL Server is not. Snowflake’s unique Cloud Architecture enables unlimited scale and concurrency without resource contention, the ‘Holy Grail’ of Data Warehousing. One of the biggest challenges of migrating data from SQL server to Snowflake is choosing from all the different options available. This blog post covers the detailed steps of 4 methods that you need to follow for SQL Server to Snowflake migration. Read along and decide, which method suits you the best! What is MS SQL Server? Microsoft SQL Server (MS SQL Server) is a relational database management system (RDBMS) developed by Microsoft. It is used to store and retrieve data as requested by other software applications, which may run either on the same computer or on another computer across a network. MS SQL Server is designed to handle a wide range of data management tasks and supports various transaction processing, business intelligence, and analytics applications. Key Features of SQL Server: Scalability: Supports huge databases and multiple concurrent users. High Availability: Features include Always On and Failover clustering. Security: Tight security through solid encryption, auditing, row-level security. Performance: High-Speed in-memory OLTP and Columnstore indexes Integration: Integrates very well with other Microsoft services and Third-Party Tools Data Tools: In-Depth tools for ETL, reporting, data analysis Cloud Integration: Comparatively much easier to integrate with Azure services Management: SQL Server Management Studio for the management of Databases Backup and Recovery: Automated Backups, Point-in-Time Restore. TSQL: Robust Transact-SQL in complex queries and stored procedures. What is Snowflake? Snowflake is a cloud-based data warehousing platform that is designed to handle large-scale data storage, processing, and analytics. It stands out due to its architecture, which separates compute, storage, and services, offering flexibility, scalability, and performance improvements over traditional data warehouses. Key Features of Snowflake: Scalability: Seamless scaling of storage and compute independently. Performance: Fast query performance with automatic optimization. Data Sharing: Secure and easy data sharing across organizations. Multi-Cloud: Operates on AWS, Azure, and Google Cloud. Security: Comprehensive security features including encryption and role-based access. Zero Maintenance: Fully managed with automatic updates and maintenance. Data Integration: Supports diverse data formats and ETL tools. 
Load your data from MS SQL Server to SnowflakeGet a DemoTry itLoad your data from Salesforce to SnowflakeGet a DemoTry itLoad your data from MongoDB to SnowflakeGet a DemoTry it Methods to Connect SQL Server to Snowflake The following 4 methods can be used to transfer data from Microsoft SQL server to Snowflake easily: Method 1: Using SnowSQL to connect SQL server to Snowflake Method 2: Using Custom ETL Scripts to connect SQL Server to Snowflake Method 3: Using LIKE.TG Data to connect Microsoft SQL Server to Snowflake Method 4: SQL Server to Snowflake Using Snowpipe Method 1: Using SnowSQL to Connect Microsoft SQL Server to Snowflake To migrate data from Microsoft SQL Server to Snowflake, you must perform the following steps: Step 1: Export data from SQL server using SQL Server Management Studio Step 2: Upload the CSV file to an Amazon S3 Bucket using the web console Step 3: Upload data to Snowflake From S3 Step 1: Export Data from SQL Server Using SQL Server Management Studio SQL Server Management Studio is a data management and administration software application that launched with SQL Server. You will use it to extract data from a SQL database and export it to CSV format. The steps to achieve this are: Install SQL Server Management Studio if you don’t have it on your local machine. Launch the SQL Server Management Studio and connect to your SQL Server. From the Object Explorer window, select the database you want to export and right-click on the context menu in the Tasks sub-menu and choose the Export data option to export table data in CSV. The SQL Server Import and Export Wizard welcome window will pop up. At this point, you need to select the Data source you want to copy from the drop-down menu. After that, you need to select SQL Server Native Client 11.0 as the data source. Select an SQL Server instance from the drop-down input box. Under Authentication, select “Use Windows Authentication”. Just below that, you get a Database drop-down box, and from here you select the database from which data will be copied. Once you’re done filling out all the inputs, click on the Next button. The next window is the Choose a Destination window. Under the destination drop-down box, select the Flat File Destination for copying data from SQL Server to CSV. Under File name, select the CSV file that you want to write to and click on the Next button. In the next screen, select Copy data from one or more tables or views and click Next to proceed. A “Configure Flat File Destination” screen will appear, and here you are going to select the table from the Source table or view. This action will export the data to the CSV file. Click Next to continue. You don’t want to change anything on the Save and Run Package window so just click Next. The next window is the Complete Wizard window which shows a list of choices that you have selected during the exporting process. Counter-check everything and if everything checks out, click the Finish button to begin exporting your SQL database to CSV. The final window shows you whether the exporting process was successful or not. If the exporting process is finished successfully, you will see a similar output to what’s shown below. Step 2: Upload the CSV File to an Amazon S3 Bucket Using the Web Console After completing the exporting process to your local machine, the next step in the data transfer process from SQL Server to Snowflake is to transfer the CSV file to Amazon S3. Steps to upload a CSV file to Amazon S3: Start by creating a storage bucket. 
Go to the AWS S3 Console Click the Create Bucket button and enter a unique name for your bucket on the form. Choose the AWS Region where you’d like to store your data. Create a new S3 bucket. Create the directory that will hold your CSV file. In the Buckets pane, click on the name of the bucket that you created. Click on the Actions button, and select the Create Folder option. Enter a unique name for your new folder and click Create. Upload the CSV file to your S3 bucket. Select the folder you’ve just created in the previous step. Select Files wizard and then click on the Add Files button in the upload section. Next, a file selection dialog box will open. Here you will select the CSV file you exported earlier and then click Open. Click on the Start Upload button and you are done! Move your SQL Server Data to Snowflake using LIKE.TG Start for Free Now Step 3: Upload Data to Snowflake From S3 Since you already have an Amazon Web Services (AWS) account and you are storing your data files in an S3 bucket, you can leverage your existing bucket and folder paths for bulk loading into Snowflake. To allow Snowflake to read data from and write data to an Amazon S3 bucket, you first need to configure a storage integration object to delegate authentication responsibility for external cloud storage to a Snowflake identity and access management (IAM) entity. Step 3.1: Define Read-Write Access Permissions for the AWS S3 Bucket Allow the following actions: “s3:PutObject” “s3:GetObject” “s3:GetObjectVersion” “s3:DeleteObject” “s3:DeleteObjectVersion” “s3:ListBucket” The following sample policy grants read-write access to objects in your S3 bucket. { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowListingOfUserFolder", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::bucket_name" ] }, { "Sid": "HomeDirObjectAccess", "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObjectVersion", "s3:DeleteObject", "s3:GetObjectVersion" ], "Resource": "arn:aws:s3:::bucket_name/*" } ] } For a detailed explanation of how to grant access to your S3 bucket, check out this link. Step 3.2: Create an AWS IAM Role and record your IAM Role ARN value located on the role summary page because we are going to need it later on. Step 3.3: Create a cloud storage integration using the STORAGE INTEGRATION command. CREATE STORAGE INTEGRATION <integration_name> TYPE = EXTERNAL_STAGE STORAGE_PROVIDER = S3 ENABLED = TRUE STORAGE_AWS_ROLE_ARN = '<iam_role>' STORAGE_ALLOWED_LOCATIONS = ('s3://<bucket>/<path>/', 's3://<bucket>/<path>/') [ STORAGE_BLOCKED_LOCATIONS = ('s3://<bucket>/<path>/', 's3://<bucket>/<path>/') ] Where: <integration_name> is the name of the new integration. <iam_role is> the Amazon Resource Name (ARN) of the role you just created. <bucket> is the name of an S3 bucket that stores your data files. <path> is an optional path that can be used to provide granular control over objects in the bucket. Step 3.4: Recover the AWS IAM User for your Snowflake Account Execute the DESCRIBE INTEGRATION command to retrieve the ARN for the AWS IAM user that was created automatically for your Snowflake account:DESC INTEGRATION <integration_name>; Record the following values: Step 3.5: Grant the IAM User Permissions to Access Bucket Objects Log into the AWS Management Console and from the console dashboard, select IAM. Navigate to the left-hand navigation pane and select Roles and choose your IAM Role. 
Select Trust Relationships followed by Edit Trust Relationship. Modify the policy document with the IAM_USER_ARNand STORAGE_AWS_EXTERNAL_ID output values you recorded in the previous step. { "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "AWS": "<IAM_USER_ARN>" }, "Action": "sts:AssumeRole", "Condition": { "StringEquals": { "sts:ExternalId": "<STORAGE_AWS_EXTERNAL_ID>" } } } ] } Click the Update Trust Policy button to save the changes. Step 3.6: Create an External Stage that references the storage integration you created grant create stage on schema public to role <IAM_ROLE>; grant usage on integration s3_int to role <IAM_ROLE>; use schema mydb.public; create stage my_s3_stage storage_integration = s3_int url = 's3://bucket1/path1' file_format = my_csv_format; Step 3.7: Execute COPY INTO <table> SQL command to load data from your staged files into the target table using the Snowflake client, SnowSQL. Seeing that we have already configured an AWS IAM role with the required policies and permissions to access your external S3 bucket, we have already created an S3 stage. Now that we have a stage built in Snowflake pulling this data into your tables will be extremely simple. copy into mytable from s3://mybucket credentials=(aws_key_id='$AWS_ACCESS_KEY_ID' aws_secret_key='$AWS_SECRET_ACCESS_KEY') file_format = (type = csv field_delimiter = '|' skip_header = 1); This SQL command loads data from all files in the S3 bucket to your Snowflake Warehouse. SQL Server to Snowflake: Limitations and Challenges of Using Custom Code Method The above method of connecting SQL Server to Snowflake comes along with the following limitations: This method is only intended for files that do not exceed 160GB. Anything above that will require you to use the Amazon S3 REST API. This method doesn’t support real-time data streaming from SQL Server into your Snowflake DW. If your organization has a use case for Change Data Capture (CDC), then you could create a data pipeline using Snowpipe. Also, although this is one of the most popular methods of connecting SQL Server to Snowflake, there are a lot of steps that you need to get right to achieve a seamless migration. Some of you might even go as far as to consider this approach to be cumbersome and error-prone. Method 2: Using Custom ETL Scripts Custom ETL scripts are programs that extract, transform, and load data from SQL Server to Snowflake. They require coding skills and knowledge of both databases. To use custom ETL scripts, you need to: 1. Install the Snowflake ODBC driver or a client library for your language (e.g., Python, Java, etc.). 2. Get the connection details for Snowflake (e.g., account name, username, password, warehouse, database, schema, etc.). 3. Choose a language and set up the libraries to interact with SQL Server and Snowflake. 4. Write a SQL query to extract the data you want from SQL Server. Use this query in your script to pull the data. Drawbacks of Utilizing ETL Scripts While employing custom ETL scripts to transfer data from SQL Server to Snowflake offers advantages, it also presents potential drawbacks: Complexity and Maintenance Burden: Custom scripts demand more resources for development, testing, and upkeep compared to user-friendly ETL tools, particularly as data sources or requirements evolve. Limited Scalability: Custom scripts may struggle to efficiently handle large data volumes or intricate transformations, potentially resulting in performance challenges unlike specialized ETL tools. 
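To make the outline above concrete, here is a minimal shell sketch of such a custom pipeline, using Microsoft's bcp utility to export a query result to CSV and SnowSQL to stage and load it. The account, warehouse, stage, table, and credential values are placeholders chosen for illustration, and it assumes the target table and a named stage already exist in Snowflake; note also that bcp's character mode does not quote fields, so this suits data without embedded commas.
# 1. Export a SQL Server query to a local CSV file with bcp (no header row is written).
bcp "SELECT id, first_name, last_name FROM mydb.dbo.customers" queryout /tmp/customers.csv \
  -c -t"," -S sqlserver_host -U sql_user -P 'sql_password'
# 2. Stage the file and load it into Snowflake with SnowSQL.
snowsql -a my_account -u snowflake_user -w my_warehouse -d my_db -s public -q "
  PUT file:///tmp/customers.csv @my_csv_stage AUTO_COMPRESS=TRUE;
  COPY INTO customers FROM @my_csv_stage
    FILE_FORMAT = (TYPE = CSV FIELD_DELIMITER = ',' SKIP_HEADER = 0)
    ON_ERROR = 'CONTINUE';
"
In a production script, the drawbacks listed next (credential handling, logging, retries) are exactly the pieces you would have to add around these two commands.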
SQL Server to Snowflake: Limitations and Challenges of Using the Custom Code Method
The above method of connecting SQL Server to Snowflake comes with the following limitations:
This method is only intended for files that do not exceed 160 GB. Anything above that will require you to use the Amazon S3 REST API.
This method doesn't support real-time data streaming from SQL Server into your Snowflake data warehouse. If your organization has a use case for Change Data Capture (CDC), you could create a data pipeline using Snowpipe.
Although this is one of the most popular methods of connecting SQL Server to Snowflake, there are a lot of steps that you need to get right to achieve a seamless migration. Some might even consider this approach cumbersome and error-prone.
Method 2: Using Custom ETL Scripts
Custom ETL scripts are programs that extract, transform, and load data from SQL Server to Snowflake. They require coding skills and knowledge of both databases. To use custom ETL scripts, you need to:
1. Install the Snowflake ODBC driver or a client library for your language (e.g., Python, Java, etc.).
2. Get the connection details for Snowflake (e.g., account name, username, password, warehouse, database, schema, etc.).
3. Choose a language and set up the libraries to interact with SQL Server and Snowflake.
4. Write a SQL query to extract the data you want from SQL Server. Use this query in your script to pull the data.
Drawbacks of Utilizing ETL Scripts
While employing custom ETL scripts to transfer data from SQL Server to Snowflake offers advantages, it also presents potential drawbacks:
Complexity and Maintenance Burden: Custom scripts demand more resources for development, testing, and upkeep compared to user-friendly ETL tools, particularly as data sources or requirements evolve.
Limited Scalability: Custom scripts may struggle to efficiently handle large data volumes or intricate transformations, potentially resulting in performance challenges, unlike specialized ETL tools.
Security Risks: Managing credentials and sensitive data within scripts requires meticulous attention to security. Storing passwords directly within scripts can pose significant security vulnerabilities if not adequately safeguarded.
Minimal Monitoring and Logging Capabilities: Custom scripts may lack advanced monitoring and logging features, necessitating additional development effort to establish comprehensive tracking mechanisms.
Extended Development Duration: Developing custom scripts often takes longer than configuring ETL processes within visual tools.
Method 3: Using LIKE.TG Data to Connect SQL Server to Snowflake
LIKE.TG is the only real-time ELT no-code data pipeline platform that cost-effectively automates data pipelines that are flexible to your needs. With integrations with 150+ data sources (40+ free sources), it helps you not only export data from sources and load it into destinations but also transform and enrich your data and make it analysis-ready.
The following steps are required to connect Microsoft SQL Server to Snowflake using LIKE.TG's Data Pipeline:
Step 1: Connect to your Microsoft SQL Server source.
Click PIPELINES in the Navigation Bar.
Click + CREATE in the Pipelines List View.
Select SQL Server as your source.
In the Configure your SQL Server Source page, specify the required connection settings.
You can read more about using SQL Server as a source connector for LIKE.TG here.
Step 2: Configure your Snowflake Data Warehouse as the Destination.
Click DESTINATIONS in the Navigation Bar.
Click + CREATE in the Destinations List View.
In the Add Destination page, select Snowflake as the Destination type.
In the Configure your Snowflake Warehouse page, specify the required warehouse settings.
This is how simple it can be to load data from SQL Server to Snowflake using LIKE.TG.
Method 4: SQL Server to Snowflake Using Snowpipe
Snowpipe is a feature of Snowflake that allows you to load data from external sources into Snowflake tables automatically and continuously. Here are the steps involved in this method:
1. Create an external stage in Snowflake that points to an S3 bucket where you will store the CSV file.
2. Create a target table in Snowflake to hold the incoming data.
3. Create a pipe in Snowflake that copies data from the external stage to the table. Enable auto-ingest and specify the file format as CSV (a sample pipe definition follows these steps).
4. Enable Snowpipe with the command ALTER ACCOUNT SET PIPE_EXECUTION_PAUSED = FALSE;
5. Install the Snowflake JDBC driver on your local machine and create a batch file to export data from SQL Server to a CSV file.
6. Schedule the batch file to run regularly using a tool like Windows Task Scheduler or cron.
Check out this documentation for more details.
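To make step 3 concrete, here is a minimal sketch of what such a pipe definition could look like, assuming the mytable target table and my_s3_stage external stage shown in the earlier sections; the pipe name itself is hypothetical:
create pipe my_snowpipe
  auto_ingest = true
as
  copy into mytable
  from @my_s3_stage
  file_format = (type = csv skip_header = 1);
With auto_ingest = true, the pipe relies on S3 event notifications, which you configure separately on the bucket, to detect newly arriving files.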
Drawbacks of the Snowpipe Method
Here are some key limitations of using Snowpipe for data migration from SQL Server to Snowflake:
File Size Restrictions: Snowflake imposes a per-file size limit for direct ingestion (around 160 GB). Files exceeding this necessitate additional steps, like splitting them or using the S3 REST API, adding complexity.
Real-Time/CDC Challenges: Snowpipe is ideal for micro-batches and near real-time ingestion, but it isn't built for true real-time Change Data Capture (CDC) of every single change happening in your SQL Server.
Error Handling: Error handling for failed file loads through Snowpipe can become a bit nuanced. You need to configure options like ON_ERROR = CONTINUE in your COPY INTO statements to prevent individual file failures from stopping the entire load process.
Transformation Limitations: Snowpipe primarily handles loading data into Snowflake. For complex transformations during the migration process, you may need a separate ETL/ELT tool to work with the Snowpipe-loaded data within Snowflake.
Why Migrate Data from MS SQL Server to Snowflake?
Enhanced Scalability and Elasticity: MS SQL Server, while scalable, often requires manual infrastructure provisioning for scaling compute resources. Snowflake's cloud-based architecture offers elastic scaling, allowing you to easily adjust compute power up or down based on workload demands. You only pay for the resources you use, leading to potentially significant cost savings.
Reduced Operational Burden: Managing and maintaining the on-premises infrastructure associated with MS SQL Server can be resource-intensive. Snowflake handles all infrastructure management, freeing up your IT team to focus on core data initiatives.
Performance and Concurrency: Snowflake's architecture is designed to handle high concurrency and provide fast query performance, making it suitable for demanding analytical workloads and large-scale data processing.
Additional Resources on SQL Server to Snowflake
Explore more about Loading Data to Snowflake
Conclusion
This article introduced you to how to migrate data from SQL Server to Snowflake. It provided a step-by-step guide to four methods you can use to connect your Microsoft SQL Server to Snowflake easily, and it discussed the limitations and benefits associated with each. The manual method using SnowSQL works fine for transferring data from Microsoft SQL Server to Snowflake, but it still has numerous limitations.
FAQ on SQL Server to Snowflake
Can you connect SQL Server to Snowflake?
Connecting SQL Server to Snowflake is a straightforward process. You can do this using ODBC drivers or through automated platforms like LIKE.TG, which make the task more manageable.
How to migrate data from SQL to Snowflake?
You can migrate your data from SQL Server to Snowflake using the following methods:
Method 1: Using SnowSQL to connect SQL Server to Snowflake
Method 2: Using Custom ETL Scripts to connect SQL Server to Snowflake
Method 3: Using LIKE.TG Data to connect Microsoft SQL Server to Snowflake
Method 4: SQL Server to Snowflake using Snowpipe
Why move from SQL Server to Snowflake?
Moving from SQL Server to Snowflake provides:
1. Enhanced scalability and elasticity.
2. Reduced operational burden.
3. High concurrency and fast query performance.
Can SQL be used with Snowflake?
Yes, Snowflake provides its own ANSI-compliant SQL dialect, Snowflake SQL.
What are your thoughts about the different approaches to moving data from Microsoft SQL Server to Snowflake? Let us know in the comments.
How to Sync Data from MongoDB to PostgreSQL: 2 Easy Methods
When it comes to migrating data from MongoDB to PostgreSQL, I've had my fair share of trying different methods and even making rookie mistakes, only to learn from them. The migration process can be relatively smooth if you have the right approach, and in this blog, I'm excited to share my tried-and-true methods with you to move your data from MongoDB to PostgreSQL. I'll walk you through two easy methods: a manual method for more granular control and an automated method for a faster and simpler approach. Choose the one that works for you. Let's begin!
What is MongoDB?
MongoDB is a modern, document-oriented NoSQL database designed to handle large amounts of rapidly changing, semi-structured data. Unlike traditional relational databases that store data in rigid tables, MongoDB uses flexible JSON-like documents with dynamic schemas, making it an ideal choice for agile development teams building highly scalable and available internet applications. At its core, MongoDB features a distributed, horizontally scalable architecture that allows it to easily scale out across multiple servers as data volumes grow. Data is stored in flexible, self-describing documents instead of rigid tables, enabling faster iteration of application code.
What is PostgreSQL?
PostgreSQL is a powerful, open-source object-relational database system that has been actively developed for over 35 years. It combines SQL capabilities with advanced features to store and scale complex data workloads safely. One of PostgreSQL's core strengths is its proven architecture focused on reliability, data integrity, and robust functionality. It runs on all major operating systems, has been ACID-compliant since 2001, and offers powerful extensions like the popular PostGIS for geospatial data.
Differences between MongoDB & PostgreSQL & Reasons to Sync
I have found that MongoDB is a distributed database that excels in handling modern transactional and analytical applications, particularly for rapidly changing and multi-structured data. On the other hand, PostgreSQL is an SQL database that provides all the features I need from a relational database.
Differences
Data Model: MongoDB uses a document-oriented data model, while PostgreSQL uses a table-based relational model.
Query Language: MongoDB uses its own document query syntax, while PostgreSQL uses SQL.
Scaling: MongoDB scales horizontally through sharding, while PostgreSQL typically scales vertically on more powerful hardware.
Community Support: PostgreSQL has a large, mature support community, while MongoDB's is still growing.
Reasons to migrate from MongoDB to PostgreSQL:
Better for larger data volumes: While MongoDB works well for smaller data volumes, PostgreSQL can handle larger amounts of data more efficiently with its powerful SQL engine and indexing capabilities.
SQL and strict schema: If you need to leverage SQL or require a stricter schema, PostgreSQL's relational approach with defined schemas may be preferable to MongoDB's schemaless flexibility.
Transactions: PostgreSQL offers full ACID compliance for transactions, while MongoDB has limited support for multi-document transactions.
Established solution: PostgreSQL has been around longer and has an extensive community knowledge base, tried-and-tested enterprise use cases, and a richer history of handling business-critical workloads.
Cost and performance: For large data volumes, PostgreSQL's performance as an established RDBMS can outweigh the overhead of MongoDB's flexible document model, especially when planning for future growth.
Integration: If you need to integrate your database with other systems that primarily work with SQL-based databases, PostgreSQL's SQL support makes integration simpler.
MongoDB to PostgreSQL: 2 Migration Approaches
Method 1: How to Migrate Data from MongoDB to PostgreSQL Manually?
To manually transfer data from MongoDB to PostgreSQL, I'll follow a straightforward ETL (Extract, Transform, Load) approach. Here's how I do it:
Prerequisites and Configurations
MongoDB Version: For this demo, I am using MongoDB version 4.4.
PostgreSQL Version: Ensure you have PostgreSQL version 12 or higher installed.
MongoDB and PostgreSQL Installation: Both databases should be installed and running on your system.
Command Line Access: Make sure you have access to the command line or terminal on your system.
CSV File Path: Ensure the CSV file path specified in the COPY command is accurate and accessible from PostgreSQL.
Step 1: Extract the Data from MongoDB
First, I use the mongoexport utility to export data from MongoDB, making sure the exported data is in CSV format. Here's the command I run from a terminal:
mongoexport --host localhost --db bookdb --collection books --type=csv --out books.csv --fields name,author,country,genre
This command generates a CSV file named books.csv. It assumes that I have a MongoDB database named bookdb with a books collection containing the specified fields.
Step 2: Create the PostgreSQL Table
Next, I create a table in PostgreSQL that mirrors the structure of the data in the CSV file. Here's the SQL statement I use to create a corresponding table:
CREATE TABLE books (
  id SERIAL PRIMARY KEY,
  name VARCHAR NOT NULL,
  author VARCHAR NOT NULL,
  country VARCHAR NOT NULL,
  genre VARCHAR NOT NULL
);
This table structure matches the fields exported from MongoDB.
Step 3: Load the Data into PostgreSQL
Finally, I use the PostgreSQL COPY command to import the data from the CSV file into the newly created table. Here's the command I run:
COPY books(name,author,country,genre) FROM 'C:/path/to/books.csv' DELIMITER ',' CSV HEADER;
This command loads the data into the PostgreSQL books table, matching the CSV header fields to the table columns.
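One caveat worth noting: COPY ... FROM reads the file from the database server's own filesystem, so if I am loading from a client machine instead, the psql \copy meta-command is the usual alternative because it streams the file from the client. A sketch assuming the same table and path as above:
\copy books(name,author,country,genre) FROM 'C:/path/to/books.csv' DELIMITER ',' CSV HEADER
A quick SELECT count(*) FROM books; afterwards confirms how many rows were loaded.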
Pros and Cons of the Manual Method
Pros:
It's easy to perform migrations for small data sets.
I can use the existing tools provided by both databases without relying on external software.
Cons:
The manual nature of the process can introduce errors.
For large migrations with multiple collections, this process can become cumbersome quickly.
It requires expertise to manage effectively, especially as the complexity of the requirements increases.
Integrate MongoDB to PostgreSQL in minutes. Get your free trial right away!
Method 2: How to Migrate Data from MongoDB to PostgreSQL using LIKE.TG Data
As someone who has leveraged LIKE.TG Data for migrating between MongoDB and PostgreSQL, I can attest to its efficiency as a no-code ELT platform. What stands out for me is the seamless integration with transformation capabilities and auto schema mapping. Let me walk you through the easy 2-step process:
a. Configure MongoDB as your Source: Connect your MongoDB account to LIKE.TG's platform by configuring MongoDB as a source connector. LIKE.TG provides an in-built MongoDB integration that allows you to set up the connection quickly.
b. Set PostgreSQL as your Destination: Select PostgreSQL as your destination. Here, you need to provide the necessary details, such as the database host, user, and password.
You have successfully synced your data between MongoDB and PostgreSQL. It is that easy! I would choose LIKE.TG Data for migrating data from MongoDB to PostgreSQL because it simplifies the process, ensuring seamless integration and reducing the risk of errors. With LIKE.TG Data, I can easily migrate my data, saving time and effort while maintaining data integrity and accuracy.
Additional Resources on MongoDB to PostgreSQL
Sync Data from PostgreSQL to MongoDB
What's your pick?
When deciding how to migrate your data from MongoDB to PostgreSQL, the choice largely depends on your specific needs, technical expertise, and project scale.
Manual Method: If you prefer granular control over the migration process and are dealing with smaller datasets, the manual ETL approach is a solid choice. This method allows you to manage every step of the migration, ensuring that each aspect is tailored to your requirements.
LIKE.TG Data: If simplicity and efficiency are your top priorities, LIKE.TG Data's no-code platform is perfect. With its seamless integration, automated schema mapping, and real-time transformation features, LIKE.TG Data offers a hassle-free migration experience, saving you time and reducing the risk of errors.
FAQ on MongoDB to PostgreSQL
How to convert MongoDB to Postgres?
Step 1: Extract data from MongoDB using the mongoexport command.
Step 2: Create a table in PostgreSQL to hold the incoming data.
Step 3: Load the exported CSV from MongoDB into PostgreSQL.
Is Postgres better than MongoDB?
Choosing between PostgreSQL and MongoDB depends on your specific use case and requirements.
How to sync MongoDB and PostgreSQL?
Syncing data between MongoDB and PostgreSQL typically involves implementing an ETL process or using specialized tools like LIKE.TG, Stitch, etc.
How to transfer data from MongoDB to SQL?
1. Export data from MongoDB.
2. Transform the data (if necessary).
3. Import the data into the SQL database.
4. Handle data mapping.
How to Sync Data from PostgreSQL to Google BigQuery in 2 Easy Methods
Are you trying to derive deeper insights from PostgreSQL by moving the data into a Data Warehouse like Google BigQuery? Well, you have landed on the right article. It has become easier than ever to replicate data from PostgreSQL to BigQuery. This article will give you a brief overview of PostgreSQL and Google BigQuery, and you will learn how to set up your PostgreSQL to BigQuery integration using two methods. The limitations of the manual method will also be discussed in further sections. Read along to decide which method of connecting PostgreSQL to BigQuery is best for you.
Introduction to PostgreSQL
PostgreSQL, although primarily used as an OLTP database, is one of the popular tools for analyzing data at scale. Its novel architecture, reliability at scale, robust feature set, and extensibility give it an advantage over other databases.
Introduction to Google BigQuery
Google BigQuery is a serverless, cost-effective, and highly scalable data warehousing platform with built-in machine learning capabilities, and its BI Engine provides fast, in-memory analysis. It combines speedy SQL queries with the processing capacity of Google's infrastructure to manage business transactions, data from several databases, and access control restrictions for users viewing and querying data. BigQuery is used by several firms, including UPS, Twitter, and Dow Jones. UPS uses BigQuery to predict the exact volume of packages for its various services. Twitter uses BigQuery to help with ad updates and the combining of millions of data points per second.
BigQuery offers the following features for data privacy and protection of your data:
Encryption at rest
Integration with Cloud Identity
Network isolation
Access management for granular access control
Methods to Set up PostgreSQL to BigQuery Integration
For the scope of this blog, both methods are covered, with particular attention to the steps and challenges of the manual approach, so that you have the right details to make a choice. Below are the two methods:
Method 1: Using LIKE.TG Data to Set Up PostgreSQL to BigQuery Integration
The steps to load data from PostgreSQL to BigQuery using LIKE.TG Data are as follows:
Step 1: Connect your PostgreSQL account to LIKE.TG's platform. LIKE.TG has an in-built PostgreSQL integration that connects to your account within minutes. The available ingestion modes are Logical Replication, Table, and Custom SQL, with the XMIN ingestion mode available for Early Access. Logical Replication is the recommended ingestion mode and is selected by default.
Step 2: Select Google BigQuery as your destination and start moving your data.
With this, you have successfully set up Postgres to BigQuery replication using LIKE.TG Data. Here are more reasons to try LIKE.TG:
Schema Management: LIKE.TG takes away the tedious task of schema management; it automatically detects the schema of incoming data and maps it to the destination schema.
Incremental Data Load: LIKE.TG allows the transfer of data that has been modified in real time. This ensures efficient utilization of bandwidth on both ends.
Data Transformation: It provides a simple interface to perfect, modify, and enrich the data you want to transfer.
Method 2: Manual ETL Process to Set Up PostgreSQL to BigQuery Integration
To execute the following steps, you need a pre-existing database and a table populated with PostgreSQL records. Let's take a detailed look at each step.
Step 1: Extract Data From PostgreSQL
The data from PostgreSQL needs to be extracted and exported into a CSV file. To do that, run the following command in the PostgreSQL workbench:
COPY your_table_name TO 'new_file_location\new_file_name' CSV HEADER;
After the data is successfully exported to a CSV file, you should see a confirmation message on your console.
Step 2: Clean and Transform Data
To upload the data to Google BigQuery, the tables and the data need to be compatible with the BigQuery format. Keep the following things in mind while migrating data to BigQuery:
BigQuery expects CSV data to be UTF-8 encoded.
BigQuery doesn't enforce primary key and unique key constraints; your ETL process must do so.
PostgreSQL and BigQuery have different column types, but most of them are convertible. You can visit the official BigQuery documentation page to learn more about BigQuery data types.
DATE values must be dash (-) separated and in the form YYYY-MM-DD (year-month-day). Fortunately, the default date format in PostgreSQL is the same, YYYY-MM-DD, so if you are simply selecting date columns, they should already be in the correct format. If a date is stored as a string in the table for any reason, the TO_DATE function in PostgreSQL can convert it while selecting the data.
Syntax: TO_DATE(str, format)
Example: SELECT TO_DATE('31,12,1999', 'DD,MM,YYYY');
Result: 1999-12-31
In the TIMESTAMP type, the hh:mm:ss (hour-minute-second) portion must use a colon (:) separator. Similar to the DATE type, the TO_TIMESTAMP function in PostgreSQL is used to convert strings into timestamps.
Syntax: TO_TIMESTAMP(str, format)
Example: SELECT TO_TIMESTAMP('2017-03-31 9:30:20', 'YYYY-MM-DD HH:MI:SS');
Result: 2017-03-31 09:30:20-07
Make sure text columns are quoted if they can potentially contain delimiter characters.
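Putting Steps 1 and 2 together, here is a minimal sketch of an export query that applies these conversions on the way out; the table name, column names, and file path are hypothetical and only illustrate the pattern:
COPY (
  SELECT id,
         name,
         TO_DATE(joining_date_text, 'DD,MM,YYYY') AS joining_date,
         TO_TIMESTAMP(last_login_text, 'YYYY-MM-DD HH24:MI:SS') AS last_login
  FROM emp
) TO 'C:/exports/emp.csv' WITH (FORMAT CSV, HEADER, ENCODING 'UTF8');
The ENCODING option covers BigQuery's UTF-8 expectation regardless of the database's default encoding.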
Step 3: Upload to a Google Cloud Storage (GCS) Bucket
If you haven't already, you need to create a storage bucket in Google Cloud for the next step.
3. a) Go to your Google Cloud account and select Cloud Storage → Buckets.
3. b) Select a bucket from your existing list of buckets. If you do not have an existing bucket, you must create a new one. You can follow Google's official documentation to create a new bucket.
3. c) Upload your .csv file into the bucket by clicking the upload file option and selecting the file that you want to upload.
Step 4: Upload to a BigQuery Table from GCS
4. a) Go to the Google Cloud console and select BigQuery from the dropdown. A list of project IDs will appear. Select the project ID you want to work with and select Create Dataset.
4. b) Provide the configuration per your requirements and create the dataset. Your dataset should be successfully created after this process.
4. c) Next, you must create a table in this dataset. To do so, select the project ID where you created the dataset and then select the dataset name that was just created. Then click on Create Table from the menu that appears at the side.
4. d) To create the table, select the source as Google Cloud Storage. Next, select the correct GCS bucket with the .csv file. Then select the file format that matches the file in the GCS bucket; in this case, it should be the .csv file format. Provide a table name for your table in the BigQuery dataset. Select the mapping option as auto-mapping if you want to migrate the data as it is.
4. e) Your table should be created next and loaded with the same data from PostgreSQL.
Step 5: Query the Table in BigQuery
After loading the table into BigQuery, you can query it by selecting the QUERY option above the table and writing standard SQL. Make sure you reference the correct project ID, dataset name, and table name. For example, you might extract records from the emp table where the job is manager.
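A minimal sketch of such a query, assuming hypothetical my_project and my_dataset identifiers for the emp table loaded above:
SELECT *
FROM `my_project.my_dataset.emp`
WHERE job = 'manager';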
Advantages of Manually Loading Data from PostgreSQL to BigQuery
Manual migration doesn't require setting up and maintaining additional infrastructure, which can save on operational costs.
Manual migration processes are straightforward and involve fewer components, reducing the complexity of the operation.
You have complete control over each step of the migration process, allowing for customized data handling and immediate troubleshooting if issues arise.
By manually managing data transfer, you can ensure compliance with specific security and privacy requirements that might be critical for your organization.
Does PostgreSQL Work As a Data Warehouse?
Yes, you can use PostgreSQL as a data warehouse, but the main challenges are:
A data engineer will have to build a data warehouse architecture on top of the existing design of PostgreSQL. To store and build models, you will need to create multiple interlinked databases. However, as PostgreSQL lacks the capability for advanced analytics and reporting, this will further limit its usefulness.
PostgreSQL can't handle the processing of huge data volumes. Data warehouses offer features such as parallel processing for advanced queries, which PostgreSQL lacks. That level of scalability and performance with minimal latency is not possible with the database.
Limitations of the Manual Method
The manual migration process can be time-consuming, requiring significant effort to export, transform, and load data, especially if the dataset is large or complex.
Manual processes are susceptible to human errors, such as incorrect data export settings, file handling mistakes, or misconfigurations during import.
If the migration needs to be performed regularly or involves multiple tables and datasets, the repetitive nature of manual processes can lead to inefficiency and increased workload.
Manual migrations can be resource-intensive, consuming significant computational and human resources that could be utilized for other critical tasks.
Additional Reads
Migrate Data from Postgres to MySQL
PostgreSQL to Oracle Migration
Connect PostgreSQL to MongoDB
Connect PostgreSQL to Redshift
Replicate Postgres to Snowflake
Conclusion
Migrating data from PostgreSQL to BigQuery manually can be complex, but automated data pipeline tools can significantly simplify the process. We've discussed two methods for moving data from PostgreSQL to BigQuery: the manual process, which requires a lot of configuration and effort, and automated tools like LIKE.TG Data. Whether you choose a manual approach or leverage data pipeline tools like LIKE.TG Data, following the steps outlined in this guide will help ensure a successful migration.
FAQ on PostgreSQL to BigQuery
How do you transfer data from Postgres to BigQuery?
To transfer data from PostgreSQL to BigQuery, export your PostgreSQL data to a format like CSV or JSON, then use BigQuery's data import tools or APIs to load the data into BigQuery tables.
Can I use PostgreSQL in BigQuery?
No, BigQuery does not natively support PostgreSQL as a database engine. It is a separate service with its own architecture and SQL dialect optimized for large-scale analytics and data warehousing.
Can PostgreSQL be used for Big Data?
Yes, PostgreSQL can handle large datasets and complex queries effectively, making it suitable for big data applications.
How do you migrate data from Postgres to Oracle?
To migrate data from PostgreSQL to Oracle, export your PostgreSQL data as SQL scripts or CSV files, then import them into Oracle using tools such as SQL*Loader or Oracle SQL Developer.