Databases are frequently left out when an application and its engineering culture are transformed toward DevOps or continuous deployment. And you could be forgiven for that, because setting up repeatable, fast and up-to-date database environments for an application is really hard!
By not including the database in the pipeline, most of the work related to database changes ends up being manual, with the associated costs and risks. This also:
- Leads to a lack of traceability of database changes (change history)
- Prevents applying Continuous Integration (CI) and Continuous Delivery (CD) good practices to their full extent
- Promotes fear of changes in the organization
What Are Technical Best Practices for Databases and DevOps?
First of all, let’s cover best practices for databases and DevOps. This is worth spelling out because databases were never part of the original DevOps vision, so as a practice there is a general lack of culture and well-established processes around building databases into your pipeline. Alright, let’s jump in.
Test!
No brainer: you should be testing your databases every time you push. The components that house your data (aka gold) must not compromise or lose any of it, and only thorough testing of your builds will catch that before it happens. I have seen databases neglected when it comes to testing, and often it comes down to a single developer who manually tests and deploys each build. It doesn’t need to be like that. Test!
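As a minimal sketch, a smoke test like the one below could run on every push. It assumes a local SingleStore instance reachable over the MySQL wire protocol via pymysql, and the host, credentials and users table are placeholders for illustration.

```python
# test_db_smoke.py -- illustrative only: connection details and schema are assumptions.
import pymysql
import pytest


@pytest.fixture
def conn():
    # Assumes a local SingleStore instance (MySQL wire-protocol compatible).
    connection = pymysql.connect(
        host="127.0.0.1", port=3306, user="root", password="test", database="app"
    )
    yield connection
    connection.close()


def test_users_table_exists(conn):
    with conn.cursor() as cur:
        cur.execute("SHOW TABLES LIKE 'users'")
        assert cur.fetchone() is not None


def test_no_rows_lost(conn):
    # Example invariant: a deployment must never drop rows from a critical table.
    with conn.cursor() as cur:
        cur.execute("SELECT COUNT(*) FROM users")
        (count,) = cur.fetchone()
        assert count >= 0  # replace 0 with the expected baseline for your data
```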
Developers need a way to easily create local databases
Right off the bat, it needs to be easy for everyone on the team to set up databases locally, in a cloud sandbox environment, or both! Here’s where containers come to the rescue. Containers are a good way to practice: they’re easy and cheap to set up, and most importantly, if something goes wrong you can throw everything out and start over again. Your team needs to be able to develop in a non-shared environment to ensure everything is working correctly.
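One way to script this is with the testcontainers library, so each developer (or test run) gets a throwaway database. This is only a sketch: the image name, port and environment variables are assumptions, so check the SingleStore documentation for the current dev image and its settings.

```python
# throwaway_db.py -- illustrative sketch of spinning up a disposable database container.
# The image name, port and env vars below are assumptions; verify them against the docs.
from testcontainers.core.container import DockerContainer


def start_throwaway_singlestore(license_key: str, root_password: str = "test") -> DockerContainer:
    container = (
        DockerContainer("ghcr.io/singlestore-labs/singlestoredb-dev:latest")
        .with_env("SINGLESTORE_LICENSE", license_key)
        .with_env("ROOT_PASSWORD", root_password)
        .with_exposed_ports(3306)
    )
    container.start()
    # If anything goes wrong, just stop() the container, throw it away and start again.
    return container
```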
The database schema, including all indexes, needs to be in source control
If developers need to create local builds of the database, that also means that all components that shape the database or control how it performs business logic need to be maintained using source control. Maintaining these changes can be simplified by making sure all changes are performed using migrations.
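To make the idea concrete, here is a minimal sketch of a migration runner that applies numbered SQL files in order and records what has been applied. The directory layout, table names and connection details are assumptions; in practice you would usually reach for an established migration tool rather than rolling your own.

```python
# migrate.py -- minimal illustrative migration runner; real projects typically use a
# dedicated migration tool. Paths, table names and connection details are assumptions.
import pathlib
import pymysql

MIGRATIONS_DIR = pathlib.Path("migrations")  # e.g. 001_create_users.sql, 002_add_index.sql


def apply_pending_migrations(conn):
    with conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS schema_migrations (filename VARCHAR(255) PRIMARY KEY)"
        )
        cur.execute("SELECT filename FROM schema_migrations")
        applied = {row[0] for row in cur.fetchall()}
        for path in sorted(MIGRATIONS_DIR.glob("*.sql")):
            if path.name in applied:
                continue
            cur.execute(path.read_text())  # one statement per file keeps this simple
            cur.execute("INSERT INTO schema_migrations VALUES (%s)", (path.name,))
    conn.commit()


if __name__ == "__main__":
    connection = pymysql.connect(host="127.0.0.1", user="root", password="test", database="app")
    apply_pending_migrations(connection)
```

Because every change lives in a versioned file, the schema and its indexes get the same change history, code review and rollback story as the rest of the application.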
Practice in a production-like environment
Everyone on the team should be able to develop and test database code in a production-like environment before pushing out changes. Trust me, you would rather have one of your developers topple a staging environment than the production one. This environment should also be simple to tear down and set up again.
You need to test a change before applying it to a production environment. If the table data is huge — so huge that it would be costly to replicate it in a different environment from production — make sure you can at least simulate the change with a significant set of data. This will help ensure the change won’t take forever, and you won’t be blocking a table for a long period of time.
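A rough sketch of that rehearsal is below: seed a staging table with a large volume of synthetic rows, then time the schema change before it ever touches production. The table, columns, row counts and host are all assumptions for illustration.

```python
# rehearse_alter.py -- illustrative sketch: seed a staging table with synthetic data and
# time a schema change against it before running it in production. Names are assumptions.
import time
import pymysql

conn = pymysql.connect(host="staging-db", user="root", password="test", database="app")

with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS orders_rehearsal (id BIGINT, amount DOUBLE)")

    # Seed a significant number of synthetic rows in batches so the rehearsal is representative.
    for batch_no in range(200):
        batch = [(batch_no * 10_000 + i, i * 0.01) for i in range(10_000)]  # ~2M rows total
        cur.executemany("INSERT INTO orders_rehearsal VALUES (%s, %s)", batch)
    conn.commit()

    started = time.monotonic()
    cur.execute("ALTER TABLE orders_rehearsal ADD COLUMN discount DOUBLE")
    print(f"ALTER took {time.monotonic() - started:.1f}s")  # gauge how long the table is blocked
```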
Be sure to monitor database systems for performance and compliance
Did you know you can automate this? Like any good CI/CD pipeline, all the important business logic should be thoroughly tested and automated. This ensures that any changes you make to your database environment won’t break the build, your users’ trust or the law. Be sure that you are taking into account regional differences and regulatory requirements.
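As one hedged example of what an automated check could look like, the sketch below scans information_schema for column names that suggest sensitive data is being stored where your policy says it shouldn’t be, and fails the build if it finds any. The database name and the naming convention are assumptions; real compliance checks would be driven by your own policies and regulations.

```python
# ci_compliance_check.py -- illustrative sketch of a policy check that runs in CI.
# The schema name "app" and the sensitive-column naming convention are assumptions.
import sys
import pymysql

SENSITIVE = ("ssn", "passport", "credit_card")  # column names that must not appear here

conn = pymysql.connect(host="127.0.0.1", user="root", password="test")
with conn.cursor() as cur:
    cur.execute(
        "SELECT table_name, column_name FROM information_schema.columns "
        "WHERE table_schema = %s",
        ("app",),
    )
    violations = [
        (table, column)
        for table, column in cur.fetchall()
        if any(marker in column.lower() for marker in SENSITIVE)
    ]

if violations:
    print("Compliance check failed:", violations)
    sys.exit(1)  # fail the pipeline so the change never reaches production
print("Compliance check passed")
```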
Microservices are a good way to decouple the database
In a microservices architecture, each service owns its data. The only way other microservices interact with that data is by using the methods the owning service exposes, rather than going directly to the database, even if it’s possible and “easier” to do it that way.
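A minimal sketch of that boundary is below. The service, table and credentials are all illustrative; the point is simply that only the owning service talks to its database, and everyone else calls its methods (or its HTTP/gRPC API).

```python
# orders_service.py -- illustrative sketch: the service owns its database, and other
# services depend on its methods rather than on its tables.
import pymysql


class OrdersService:
    def __init__(self):
        # Only this service holds credentials for the orders database.
        self._conn = pymysql.connect(
            host="orders-db", user="orders", password="secret", database="orders"
        )

    def get_order(self, order_id: int):
        with self._conn.cursor() as cur:
            cur.execute("SELECT id, status, total FROM orders WHERE id = %s", (order_id,))
            row = cur.fetchone()
        return {"id": row[0], "status": row[1], "total": row[2]} if row else None


# Another microservice depends on the method, never on the schema:
#   order = orders_service.get_order(42)
```

If the orders schema changes later, only OrdersService has to change; its consumers keep calling the same method.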
Let’s get into the code
Now that we’ve discussed why you should be automating your database deployments, let’s dig into a practical example. In this guide, I’ll be showing you a workflow example that automates a deployment of SingleStore using GitHub Actions every time you push up to GitHub. The workflow runs a script that connects to the SingleStore service, creates a table and populates it with data. To test that the workflow creates and populates the SingleStore table, the script prints the data from the table to the console. This example only shows you how to get it set up, but I would encourage you to add tests based on this configuration that fit the unique needs of your team, requirements and of course, your application.
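The script in the demo repository may differ, but a connect, create, populate and print script of the kind described above could look roughly like this. The host, credentials and table name are assumptions; in the workflow they would normally be injected through environment variables and repository secrets rather than hard-coded.

```python
# app.py -- rough sketch of the kind of script the workflow runs: connect to the
# SingleStore service, create a table, insert a few rows, then print them back out.
# Connection details and the table name are assumptions for illustration.
import os
import pymysql

conn = pymysql.connect(
    host=os.environ.get("SINGLESTORE_HOST", "127.0.0.1"),
    port=int(os.environ.get("SINGLESTORE_PORT", "3306")),
    user=os.environ.get("SINGLESTORE_USER", "root"),
    password=os.environ.get("SINGLESTORE_PASSWORD", ""),
)

with conn.cursor() as cur:
    cur.execute("CREATE DATABASE IF NOT EXISTS demo")
    cur.execute("USE demo")
    cur.execute("CREATE TABLE IF NOT EXISTS people (id INT, name VARCHAR(64))")
    cur.executemany(
        "INSERT INTO people VALUES (%s, %s)",
        [(1, "Ada"), (2, "Grace"), (3, "Linus")],
    )
    conn.commit()

    # Printing the rows back out is the check that the table was created and populated.
    cur.execute("SELECT id, name FROM people ORDER BY id")
    for row in cur.fetchall():
        print(row)
```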
SingleStore on GitHub Actions demo
Before we can start writing code, we need to make sure your environment is set up and ready to go. First, clone this git repository to your machine:
git clone https://github.com/singlestore-labs/singlestore-and-github-actions-demo.git
Note: You can totally set up automated deployments using SingleStoreDB Cloud if you would like. In fact, it’s good practice to run tests in an environment similar to your production environment. You will need to set up new clusters and point your GitHub config file at those database instances instead of the local container images.
Next, make your SingleStore license available to the workflow (for example, as a repository secret or in your local environment file):
SINGLESTORE_LICENSE="paste your singlestore license here"
Finally, trigger your GitHub Actions workflow by pushing your code to GitHub!

And that is it! Now you can relax and commit your code and migrations, knowing the deployment process is carried out automatically every time you push to your repository.