Feed: SingleStore Blog.
SingleStore is proud to count many of the world’s most innovative cyber security companies among our customers. Here’s a roundup of some of these industry leaders’ latest news and developments, along with highlights from our recent webinar with cyber solutions providers Twingo and Armis Security.
The war in Ukraine brings heightened cyber risk
Akamai noted a specific surge in European DDoS activity as tensions escalated, reflected in its data since fall 2021: total attacks in Europe, the Middle East and Africa (EMEA) are up 220% over the average of the previous four years. The cybersecurity provider even had to provide emergency DDoS protection for a new customer in Eastern Europe impacted by increased attacker activity across the region.
One phishing email had a Word document attached that contained a malicious JavaScript file. The script would download and install two payloads: SaintBot, a downloader, and OutSteel, a simple document stealer that searches for potentially sensitive documents by file type and uploads them to a remote server.
How Armis Security prevented points of vulnerability in smart devices
Customer story: Armis saves 70% on data pipeline cost with SingleStore
SingleStoreDB powers more cyber security leaders
Nucleus Security puts SingleStoreDB at the heart of its vulnerability management (VM) solution, an all-in-one data aggregation and process automation platform for network, cloud and application security. The company needed an underlying database that was truly fast and scalable to power its platform — and as it expanded into the private sector, its existing database became a bottleneck in supporting real-time security needs.
Watch the webinar: Nucleus Security, Every Millisecond Counts for Cybersecurity
Twingo and Armis: Streamlining cybersecurity solutions through database consolidation
- The Armis Unified Visibility & Security Platform, powered by SingleStoreDB, processes 100 billion events per day (traffic, asset, user data and more) for its global customer base, with 30TB data sets in its largest customer environments. This creates a full, dynamic picture of all client assets, accessible within the product via free-form queries on devices, IP session data, predefined metrics and more, delivering 1.5-second query speed across three days’ worth of data.
- Twingo represents, sells and deploys leading big data technologies. As experts in architectural design, Twingo helped Armis choose the right technology and provides optimal big data solutions for complex problems. Twingo contributed to the proof of concept (POC) for the SingleStoreDB deployment at Armis: helping size the data cluster, redesign queries and optimize the data model, then defining and running the POC itself. In production, Armis now runs 32 managed SingleStore units, each with 8 CPU cores, 64GB of RAM and a 2TB SSD.
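For a sense of scale, the figures quoted above work out roughly as follows. This is a back-of-the-envelope sketch assuming a sustained, evenly distributed load, with the per-unit sizing taken at face value:

```python
# Back-of-the-envelope math on the Armis deployment figures quoted above.
# Assumes a sustained, evenly distributed event rate.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

events_per_day = 100_000_000_000  # 100 billion events/day
events_per_sec = events_per_day / SECONDS_PER_DAY
print(f"~{events_per_sec:,.0f} events/sec sustained")  # ~1,157,407 events/sec

# Aggregate cluster resources across the 32 managed units.
units = 32
total_cores = units * 8    # 256 CPU cores
total_ram_gb = units * 64  # 2,048 GB of RAM
total_ssd_tb = units * 2   # 64 TB of SSD
print(total_cores, total_ram_gb, total_ssd_tb)
```

In other words, the quoted ingest rate is on the order of a million events per second, spread across roughly 256 cores.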
Here are some highlights from our conversation with Ilya Gulman, Twingo’s chief technology officer. Ilya’s comments below are translated from Hebrew.
In the beginning
“Twenty years ago, all we had were two major databases: Oracle and SQL. In the twenty years since then, many different database technologies popped up. The database world became overpopulated, with each database handling specific solutions.”
“Hiring a database person today is complicated, since that person must know a lot. The most important part of our job is requirements, because each database has its own specialty. Another issue is how many times, and in how many places, we need to save data; we will get there shortly. A key question is how we are going to approach the data. Are we going to approach it as a document store, or just text search? And if it is a combination of all of these, then we need a Swiss Army knife approach. Is the data updatable? That’s quite important, since quite a few databases are not updatable.”
The problem with modern architectures
“I’ll use Amazon’s [reference architecture] for a guided walkthrough. We have events occurring in the operational database, and we write them to Kafka. This is a very important step in the process. There are islands of information within the process, and the more islands you have, the more complex the situation becomes.”
Ilya describes four problems with this architecture. “The first problem is hiring people well-versed in multiple technologies. The main pitfall we see with multiple technologies is that if people don’t have the expertise, we’ll run into problems.”
“The second issue is the fact that we have multiple data stores, and there is always some sort of discrepancy, or rather inconsistency, between them. When those systems work for years and years, they tend to lose information.”
“The third problem is the lack of ‘self-healing.’ This is something most individual databases have on their own: a built-in feature that allows the database to return to ‘base.’ For example, if it loses partitioning, it will attempt to recover the partitioning. While these databases (like Redshift and Athena) each have this feature, when you connect the databases together, the combined architecture doesn’t have self-healing out of the box.”
“You can overcome this by ignoring self-healing and working with what you have. Or some organizations build their own self-healing protocol, which is an incredible amount of work, since you have to identify what breaks and how it should be fixed.”
“The last problem we encounter with this architecture is that, because there are so many connectors, we run into needing to join across data stores. You might have a use case that needs a text search (in Elastic), followed by analytical joins (done in Amazon Redshift). We want to take both and connect them, to do both A and B. But this really isn’t easy. If you want to do this online, it’s nearly impossible.”
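The cross-store join Ilya describes can be sketched in miniature. This is a hedged illustration, not anyone’s actual code: the two stores are simulated with in-memory lists, and all names and fields are hypothetical.

```python
# Sketch of the application-side "join" across two specialized stores:
# text-search hits come back from one system, analytical rows from another,
# and the application must stitch them together by key. The stores are
# simulated here as plain lists; names and fields are illustrative only.

text_hits = [            # stand-in for a full-text query result (e.g. Elastic)
    {"device_id": "d1", "score": 9.1},
    {"device_id": "d3", "score": 7.4},
]
analytics_rows = [       # stand-in for an aggregate query result (e.g. Redshift)
    {"device_id": "d1", "events_3d": 120_456},
    {"device_id": "d2", "events_3d": 98_310},
    {"device_id": "d3", "events_3d": 4_002},
]

# The glue code every consumer of both stores has to write and maintain:
by_id = {row["device_id"]: row for row in analytics_rows}
joined = [
    {**hit, "events_3d": by_id[hit["device_id"]]["events_3d"]}
    for hit in text_hits
    if hit["device_id"] in by_id
]
print(joined)
```

Even this toy version shows the pain point: the join logic lives in application code, outside either database, where it must handle missing keys, pagination and consistency on its own.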
Can we simplify?
“If you look at the architecture, our goal is switching out what we currently have (Elastic, Redshift, Redis) and swapping in one database that does everything the others did. It can do analytics, text search, key-value store, and on and on.”
“And because it’s all in one place, you don’t need to worry about multiple joins, you don’t need different skills, and ultimately the system has a self-healing process and will eliminate inconsistencies.”
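The consolidation idea can be shown in miniature: one engine answers both the text-search predicate and the analytical aggregate in a single query, so there is no cross-store join to maintain. SQLite stands in here purely for illustration (it is not SingleStoreDB), and the schema and data are hypothetical:

```python
# One engine, one query: a text filter and an aggregation together,
# with no application-side join. SQLite is used only as a stand-in
# to illustrate the idea; the schema and rows are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (device_id TEXT, message TEXT, bytes INTEGER);
INSERT INTO events VALUES
  ('d1', 'failed login from unknown host', 512),
  ('d1', 'port scan detected', 2048),
  ('d2', 'heartbeat ok', 64);
""")

# Text predicate and analytics in a single statement, in a single engine.
rows = conn.execute("""
    SELECT device_id, COUNT(*) AS hits, SUM(bytes) AS total_bytes
    FROM events
    WHERE message LIKE '%scan%' OR message LIKE '%failed%'
    GROUP BY device_id
""").fetchall()
print(rows)  # [('d1', 2, 2560)]
```

The point is structural, not about SQLite itself: when both workloads live in one engine, the “join” is just a WHERE clause plus a GROUP BY, and consistency comes for free.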
“If your use case supports it, the alternative to [several databases] is to do a consolidation. I’d like to say two things regarding SingleStore:
- “We can use the SingleStore design to scale out, expand the number of servers, add memory as needed, etc. SingleStore allows both analytical work as well as processing work (as shown in the slide image) in a single engine.
- “It’s also multi-model — you can work in a relational database, semi-structured database, index, full-text search, JSON, etc. SingleStore can keep data in memory for faster processing.”
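As a concrete, hedged illustration of the multi-model point, a single SingleStoreDB table could hold relational columns, a JSON column and a full-text index, queried together in one statement. The schema and data below are invented, and the SingleStore-specific pieces (a `FULLTEXT` index, `JSON_EXTRACT_STRING`, `MATCH ... AGAINST`) are sketched from memory of SingleStore’s documentation; verify exact syntax against the current docs before use.

```python
# Hedged sketch of a multi-model query in one engine. The schema and the
# SingleStore-specific constructs (FULLTEXT index, JSON_EXTRACT_STRING,
# MATCH ... AGAINST) are assumptions, not verified code: a shape, not a recipe.

DDL = """
CREATE TABLE device_events (
    device_id  TEXT,
    seen_at    DATETIME,
    payload    JSON,            -- semi-structured event body
    message    TEXT,
    FULLTEXT (message)          -- full-text index on the same table
);
"""

QUERY = """
SELECT device_id,
       JSON_EXTRACT_STRING(payload, 'vendor') AS vendor,  -- JSON access
       COUNT(*) AS hits                                   -- analytics
FROM device_events
WHERE MATCH(message) AGAINST ('port scan')                -- text search
  AND seen_at >= NOW() - INTERVAL 3 DAY
GROUP BY device_id, vendor;
"""

print(DDL, QUERY)
```

The full-text predicate, JSON extraction and aggregation all run against one table in one engine, which is exactly the consolidation Ilya is describing.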
In conclusion
“Ultimately, there are two ways you can go. The first is specialization. While specialized systems are good in their specific ways and at solving specific problems, you’ll run into the pitfalls described earlier. You also end up with a lot of databases that have to be combined and work together.”
“The second way is consolidation. This is much easier, but it’s important to note that special, niche functionalities could be missing” (Ilya uses a text search example).