www.bortolotto.eu

Newsfeeds
Planet MySQL - https://planet.mysql.com

  • No More Silent Foreign Key Cascades: MySQL 9.7 Lets Child Triggers Speak Up
    MySQL 9.7 introduces a long-requested improvement: child table triggers are now executed during SQL-layer foreign key cascades. Historically, cascades executed inside InnoDB did not invoke child table triggers: when a parent row change triggered cascading changes in child tables, those triggers simply did not fire, which created gaps in auditing, derived data maintenance, and observability. This […]

  • Announcing Vitess 24
    The Vitess maintainers are happy to announce the release of version 24.0.0, along with version 2.17.0 of the Vitess Kubernetes Operator. Version 24.0.0 expands query serving capabilities for sharded keyspaces, modernizes Vitess's observability stack, and introduces faster replica provisioning through native MySQL CLONE support. The companion v2.17.0 operator release brings significant improvements to scheduled backups, with new cluster- and keyspace-level schedules that make production backup management much easier to configure at scale.

  • AI Is Raising the Bar for MySQL Database Security
    Best practices for MySQL customers and users in an AI-accelerated security landscape: a practical guide to hardening MySQL and the environment around it. Oracle recently described how AI is transforming vulnerability detection and response. The latest generation of AI is increasing the speed and scale at which vulnerabilities can be identified and remediated. Oracle is […]

  • XtraBackup incremental prepare phase is 2x-3x faster!
    TL;DR: Percona XtraBackup is a 100% open-source backup solution for Percona Server for MySQL and MySQL®. It is designed for high-availability environments, performing online, non-blocking, and highly secure backups of transactional systems without interrupting your production traffic.

    While full backups work for small databases, large-scale systems rely on incremental backups to save space and time. However, the "prepare" stage, required to make incremental backups consistent, was slow because XtraBackup processed the .delta files serially. The .delta files are generated per table and store only the modifications since the last backup.

    Great news! In XtraBackup versions 8.0.35-33 and 8.4.0-3 and later, we've added support for the --parallel option during the prepare stage. This option lets XtraBackup process multiple .delta files simultaneously, significantly reducing the preparation time, especially when you have a large number of IBD files. Add --parallel=X, where X is the number of threads to use, to the xtrabackup --prepare --apply-log-only command to speed up the incremental prepare operation.

    The Incremental Backup Workflow

    Before we dive into the performance gains, it's important to understand how incremental backups work.

    1. Creating the Backups

    The process starts with a full backup, followed by a backup that captures only the changes since the last backup. This smaller backup is called an incremental backup, and XtraBackup creates .delta files while taking it. Let's review an example:

    - Take the full backup: your starting point is Point A. This backup is an entire copy of your data.
    - Take the Inc1 backup: XtraBackup identifies the changes between Point A and Point B and creates a .delta file for every table that has changed. Delta files contain only the pages that changed between the backups.
    - Take the Inc2 backup: XtraBackup identifies the changes between Point B and Point C and creates a new set of .delta files for this specific period.
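    The three-step cycle above can be sketched as a command sequence. The paths and the `run` wrapper are assumptions for this example; `run` only echoes each command, so the sketch is safe to execute without a running server.

    ```shell
    # Illustrative backup cycle (paths are assumptions, not from the post).
    # 'run' echoes the command instead of executing xtrabackup for real.
    run() { printf '+ %s\n' "$*"; }

    # Point A: full backup -- a complete copy of the data
    run xtrabackup --backup --target-dir=/backups/full
    # Point B: Inc1 -- .delta files for pages changed since the full backup
    run xtrabackup --backup --target-dir=/backups/inc1 \
        --incremental-basedir=/backups/full
    # Point C: Inc2 -- .delta files for pages changed since Inc1
    run xtrabackup --backup --target-dir=/backups/inc2 \
        --incremental-basedir=/backups/inc1
    ```

    In practice you would drop the `run` wrapper; note that each incremental points its --incremental-basedir at the previous backup, which is what keeps the .delta files small.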
    For more detailed steps and commands, please check the documentation here: https://docs.percona.com/percona-xtrabackup/8.0/create-incremental-backup.html

    2. Preparing the Backups

    To restore the data to the latest point, you must merge these changes back into the full backup. The "prepare" phase works differently here:

    - Prepare Inc1: merge the Inc1 changes into the full backup using the --apply-log-only option. In this step, XtraBackup applies the .delta files and the redo logs, but does not apply the undo logs.
    - Prepare Inc2: merge the Inc2 changes into the updated base, again using the --apply-log-only option. XtraBackup applies the .delta files and the redo logs but skips the undo logs.
    - Final prepare: after all the incremental backups are merged, run a final prepare command on the full backup. This final step applies the undo logs to make the entire dataset consistent. If you apply the undo logs during an intermediate step, you cannot merge any further backups.

    More detailed steps to prepare an incremental backup are described here: https://docs.percona.com/percona-xtrabackup/8.0/prepare-incremental-backup.html

    The Improvement: Parallel Incremental Delta Apply

    We have improved the incremental delta apply phase, i.e., the "Prepare Inc1" and "Prepare Inc2" steps described above. The --parallel option should be used along with --apply-log-only to apply the .delta files in parallel. We completed this essential improvement as part of PXB-3427.

    In previous versions, XtraBackup applied each .delta file as soon as it was discovered in the incremental backup directory. Starting with versions 8.0.35-33 and 8.4.0-3, XtraBackup instead scans the backup directory and builds a queue of delta files. Multiple threads (defined by --parallel) consume this queue simultaneously: each thread reads a .delta file and writes its pages to the corresponding InnoDB data file (.ibd file).
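    The prepare sequence above can be sketched the same way. Again, the paths and the echoing `run` wrapper are assumptions so the sketch runs standalone; the flags themselves (--prepare, --apply-log-only, --incremental-dir, --parallel) are the documented XtraBackup options.

    ```shell
    # Illustrative prepare sequence for the workflow above (paths assumed).
    # 'run' echoes the command instead of invoking xtrabackup for real.
    run() { printf '+ %s\n' "$*"; }

    # Prepare the base backup, keeping it ready for more increments
    run xtrabackup --prepare --apply-log-only --target-dir=/backups/full
    # Merge Inc1 into the base: redo applied, undo skipped, 8 delta threads
    run xtrabackup --prepare --apply-log-only --target-dir=/backups/full \
        --incremental-dir=/backups/inc1 --parallel=8
    # Merge Inc2 the same way
    run xtrabackup --prepare --apply-log-only --target-dir=/backups/full \
        --incremental-dir=/backups/inc2 --parallel=8
    # Final prepare: apply the undo logs to make the dataset consistent
    run xtrabackup --prepare --target-dir=/backups/full
    ```

    Note that --parallel appears on the intermediate merges, which is exactly where the per-.delta-file work happens, while the final prepare (without --apply-log-only) closes the backup off from further merges.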
    Benchmarks

    This benchmark was created using the scripts and instructions in the JIRA ticket PXB-3427. When your backup contains a large number of small .delta files, increasing the --parallel value can drastically reduce the time taken to prepare the incremental backup by distributing the high per-file overhead across more threads. However, for workloads with fewer or larger files, performance typically plateaus after 16 threads, and pushing higher can even lead to slight regressions due to thread management overhead. While there is no single "golden value" to recommend for every scenario, we recommend starting with a value of 8 and tuning from there to find the optimal balance for your specific environment.

    Disk Utilization with XtraBackup prepare using --parallel=1 vs --parallel=64

    The PMM graphs below show the disk IOPS used by the XtraBackup prepare command while it applies an incremental backup to a full backup directory. The incremental backup directory contains 20,608 .delta files, each of which is 2.5 MB.

    - With --parallel=1, the maximum disk IOPS utilized is 18.2K, and the XtraBackup prepare operation finished in 3.76 minutes.
    - With --parallel=64, the maximum disk write IOPS utilized is 85K, and the XtraBackup prepare operation finished in around a minute. XtraBackup utilized 4.67x more disk IOPS and finished 3.49x faster.

    Results from the bug reporter

    We saw some amazing results shared by the reporter on PXB-3427. The time required for the XtraBackup prepare command (--prepare --apply-log-only) to complete dropped from 237 minutes to just 6 minutes. That's an incredible 40x speed-up! Here are the details from their setup:

    - Full backup: 235,188 *.ibd files
    - Incremental backup: 236,214 *.ibd.delta files
    - Average .delta size: 53,041 bytes (~53 KB)
    - Threads used: 48 (--parallel=48)
    - Disk specs: 25K IOPS and an average of 500 to 600 MB/s of throughput

    We hear you!
This specific feature came to us from a post on the community forum. We reached out, asked them to create a JIRA ticket, and then implemented the improvement. We wanted to share this story as a demonstration of our commitment to listening to and acting on community feedback! The post XtraBackup incremental prepare phase is 2x-3x faster! appeared first on Percona.

  • Orchestrator’s Next Chapter: What It Means for Percona Customers
    Last week, ProxySQL announced that they are taking over the maintenance and development of Orchestrator, the MySQL high-availability and topology management tool originally authored by Shlomi Noach. You can read their announcement here: Announcing the future of Orchestrator. We want to briefly share Percona's position on the news.

    We welcome this

    Orchestrator became the de facto standard for MySQL topology management and automated failover, and it has been a foundational tool in the ecosystem for over a decade. When the upstream project was archived, many operators were left running internal forks. A revived project under active development, with a stated roadmap and continued Apache 2.0 licensing, is good news for the MySQL community, and we're glad to see ProxySQL step up to take it on. Thanks are due to Shlomi Noach for creating Orchestrator in the first place, and to everyone who contributed to it over the years.

    A small clarification on Percona's role

    The ProxySQL announcement kindly credited Percona alongside GitHub for "stewardship over the years." To be accurate: Percona has never been a maintainer of the upstream Orchestrator project. What we have done, and will continue to do, is support our customers who rely on it. That includes operational guidance, troubleshooting, and carrying internal patches where a customer situation requires it. The upstream project itself has always lived with Shlomi and later with the team at GitHub.

    Nothing changes for Percona customers

    If you are a Percona customer running Orchestrator today, your support experience is unchanged. We will continue helping you operate it in production, diagnose issues, and plan around its role in your high-availability stack. That commitment is steady regardless of where the upstream project lives. Orchestrator's maintenance also matters to us beyond support engagements.
    Percona Operator for MySQL uses Orchestrator to manage asynchronous topologies, so our own product depends on the project staying healthy. That's part of why we plan to coordinate closely with the ProxySQL team as the next chapter unfolds.

    Coordinating with the ProxySQL team

    We plan to open coordination conversations with the ProxySQL team to make sure that operators running Orchestrator today, including our customers, have a smooth path as the project evolves. We wish the ProxySQL team well in this next chapter and look forward to supporting the community alongside them. If you're a Percona customer, reach out to your account team with any questions about your Orchestrator deployment. If you're running Orchestrator outside of a Percona engagement and want to talk through support options, get in touch with our MySQL team.

    The post Orchestrator's Next Chapter: What It Means for Percona Customers appeared first on Percona.