Monday, September 12, 2011

A Glance at Real-Time Replication From MySQL To Oracle


Intro

The open source Tungsten Replicator is very powerful. It's exciting to see how a flexible initial design lets us push the boundaries of replication with each new release. The recently published System of Record approach does exactly this for multi-master databases.

Nevertheless, in today's database environments it is often not enough to move data, however well it is done, between the nodes of a single DBMS vendor like MySQL. An entirely new set of challenges appears when Oracle joins the enterprise room.

[Image: a snapshot from the Lithuanian animated classic "Baubas" (Eng. "Bugaboo")]
Thus, this article presents how Tungsten helps interconnect the whole database picture by replicating data from MySQL to other vendors. This time I'll glance specifically at replication to Oracle.

Background

One of my main development focuses for Tungsten Replicator is support for heterogeneous replication. Allow me to define the term for complete clarity:

Heterogeneous replication - replication across different DBMS types.

Currently, the following topologies are production ready:
  • MySQL -> Oracle
  • MySQL -> PostgreSQL
  • MySQL -> Greenplum
There are quite a few others waiting behind the curtains in the POC stage too. What is really fun is that a lot of the details provided below are valid for replication not only to Oracle. The basic recipe stays the same...

Motivation

Before we dive in, let's look at the motives behind heterogeneous replication. There are at least four major forces driving the interest:
  • Real-time integration of data between different DBMS types
  • Seamless migration out of one DBMS type to another
  • Data warehousing (real-time) from different DBMS types
  • Leveraging specific SQL power of other DBMS types
In other words, having software at your fingertips which can seamlessly (with some careful preparation, though) move data from one DBMS type to another gives you a lot more freedom in choosing the DBMS best suited to a particular task.

The Recipe

Here are general steps to replicate transactionally consistent data from MySQL to Oracle:
  1. Extract the last committed transaction from MySQL's binary log.
  2. Wrap the transaction with metadata, including Global Transaction ID.
  3. Filter (transform) it to conform to the taste of Oracle.
  4. Apply this transaction to Oracle.
  5. If it succeeded, go to (1); if not, break the replication.
Yes, if your data is sensitive, it's better to break the replication when something unexpected happens. Just like the Data Charmer loves failing installations, transactional consistency loves failing replication. Compare it with a car: if the oil pressure light comes on, you had better stop. Unless you're keen to see how far the engine can drive without oil before breaking irreversibly...

After the error appears, you can investigate it, fix it and resume. That's where step (2) comes in handy: Tungsten Replicator has global transaction IDs in heterogeneous replication too, and hence knows where it stopped and where to resume.

Transaction Flow

With that in mind, how does a transaction travel from MySQL through Tungsten Replicator to Oracle? There are a couple of topologies to choose from. It is possible to configure everything with (a) a single Replicator on the Oracle side, or (b) two Replicators - one on the MySQL side and one on the Oracle side. Let's see what the latter setup looks like:


[Diagram: the transaction flow from the MySQL master through the master and slave Replicators to Oracle]

In this picture you can see all the mentioned components (Java classes) of the Replicator, including the extractor, filters and the applier. Let's drill down into each of them in a little more detail.

Extracting from MySQL

This is the entry point. Every transaction that MySQL writes to the binary log is extracted by the MySQLExtractor class. At this point, one of the primary challenges I come across when developing heterogeneous replication is, of course, SQL dialect differences. Almost naturally, each vendor has different DDL semantics, but not only that - you cannot rely on DML statements being standardized either. Luckily, for MySQL there is a solution - use row-based replication. It is a great way to get all new transactions as row change data, as opposed to textual SQL strings.
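Row-based logging must, of course, be enabled on the MySQL side for this to work. As a minimal illustration (these are standard MySQL settings, nothing Tungsten-specific):

-- Check the current binary log format:
SHOW VARIABLES LIKE 'binlog_format';
-- Switch new sessions to row-based logging (or set
-- binlog_format=row in my.cnf and restart the server):
SET GLOBAL binlog_format = 'ROW';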

For example, if "DELETE FROM t" is committed, which, effectively, deletes 3 rows, Tungsten's built-in Transaction History Log viewer will show the extracted transaction like this:
$ ./thl list -seqno 152 -service default
...
SEQ# = 152 / FRAG# = 0 (last frag)
- TIME = 2011-09-09 03:52:18.0
- EPOCH# = 93
- EVENTID = mysql-bin.000006:0000000000071708;0
- SOURCEID = sp.continuent.com
- METADATA = [mysql_server_id=103;service=default;shard=test]
- TYPE = com.continuent.tungsten.replicator.event.ReplDBMSEvent
- SQL(0) =
 - ACTION = DELETE
 - SCHEMA = test
 - TABLE = t
 - ROW# = 0
  - KEY(1: ) = 1
  - KEY(2: ) = a
  - KEY(3: ) = NULL
 - ROW# = 1
  - KEY(1: ) = 2
  - KEY(2: ) = b
  - KEY(3: ) = NULL
 - ROW# = 2
  - KEY(1: ) = 3
  - KEY(2: ) = c
  - KEY(3: ) = NULL
In other words, this is a very handy structure, which will be reconstructed into Oracle-compatible SQL by the Oracle applier at a later stage. This covers DML at the base level, but what to do with DDL statements?

Handling DDL

Currently there are three choices:
  1. Leave them as they are: if the DDL is compatible with Oracle, it is applied; if not, replication breaks and the DBA can investigate. Usually, this is a good idea, because you will know if someone is executing unexpected DDL against the cluster.
  2. Transform the DDL into Oracle-compatible SQL with the help of filters: if the application generates some DDL statements by itself and they can be identified (see the sketch after this list).
  3. Drop all DDL from the replication stream: useful during the provisioning phase, when you're loading a backup on the MySQL side and want the data to be propagated to Oracle, which has the schema already prepared.
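To make choice (2) concrete, here is a hypothetical rewrite of the kind such a filter could perform. The table and column names are invented for illustration, and a real filter has to handle many more details (character sets, defaults, indexes and so on):

-- MySQL DDL, as extracted from the binary log:
CREATE TABLE test.t (
  id INT AUTO_INCREMENT PRIMARY KEY,
  note VARCHAR(100)
) ENGINE=InnoDB;

-- An Oracle-compatible equivalent the filter could produce:
CREATE TABLE TEST.T (
  ID NUMBER(10) PRIMARY KEY,
  NOTE VARCHAR2(100)
);
-- Note: AUTO_INCREMENT has no direct Oracle counterpart, but since
-- the replicated row events already carry the generated id values,
-- the Oracle side does not need a sequence just to apply them.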
Application/schema upgrades are handled manually by a procedure like the following:
  1. Prepare DDL upgrade script for MySQL.
  2. Prepare DDL upgrade script for Oracle.
  3. Disconnect applications.
  4. Turn off binary logging on MySQL (SET SESSION SQL_LOG_BIN=0).
  5. Execute upgrade scripts against MySQL and Oracle.
  6. Turn the binary logging back on (SET SESSION SQL_LOG_BIN=1).
  7. Resume applications.
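For example, steps (4) to (6) might look like this on the MySQL side, with a hypothetical upgrade that adds a single column (the matching Oracle script is run separately):

-- MySQL session performing the upgrade without logging it:
SET SESSION SQL_LOG_BIN = 0;
ALTER TABLE test.t ADD COLUMN created_at DATETIME;
SET SESSION SQL_LOG_BIN = 1;

-- The equivalent upgrade, run directly against Oracle:
ALTER TABLE TEST.T ADD (CREATED_AT DATE);
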
Filters

Before we send our first transaction to Oracle, it needs to pass through some important transformations. This is a separate topic of its own, but here's a brief overview of the filters at work:
  • EnumToStringFilter - MySQL saves ENUM column values as indexes in the binary log; this filter maps them back to their textual representation, so that we save the appropriate text values on the Oracle side, as opposed to meaningless numbers.
  • To-upper-case filter - MySQL table and column names are usually lower case; this filter transforms the case into what Oracle likes.
  • PrimaryKeyFilter - for UPDATE and DELETE row changes, we strip all the redundant key-value pairs from the WHERE clause, leaving only the primary key comparison.
  • OptimizeUpdatesFilter - just as the MySQL binary log is redundant in the key (WHERE) part, it also carries abundant information in the SET part of an UPDATE; e.g., if a table has a hundred columns, but a statement updates only one, the row change event will contain assignments for all hundred columns; this filter strips the redundant assignments and leaves only the columns that actually changed (see the example after this list).
  • and so on...
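To illustrate the last two filters, suppose the table test.t has primary key id and columns a, b and c, and a single-column update is committed on MySQL. Conceptually (the actual event holds row data, not SQL text), the filters reduce it like this:

-- Statement committed on MySQL:
UPDATE test.t SET a = 5 WHERE id = 1;
-- The raw row event effectively carries every column on both sides:
--   SET a = 5, b = <old b>, c = <old c>
--   WHERE id = 1 AND a = <old a> AND b = <old b> AND c = <old c>
-- After PrimaryKeyFilter and OptimizeUpdatesFilter it is equivalent to:
UPDATE TEST.T SET A = 5 WHERE ID = 1;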
Applying to Oracle

And so the event, representing a transaction and holding the transformed-and-prepared-for-Oracle data change set, arrives at the door of the OracleApplier. The applier constructs a prepared statement, begins a transaction and binds the values to it. Each row of the event is executed against Oracle. When the end of the event is reached, the transaction is committed and the data is ready for applications connected to the Oracle instance.
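For the three-row DELETE shown earlier, the applier conceptually prepares and executes something like the following (assuming that column 1 is a primary key named ID and that the PrimaryKeyFilter has already stripped the other keys; the exact generated SQL is internal to the applier):

-- Prepared once for the event:
DELETE FROM TEST.T WHERE ID = ?
-- ...then executed three times within a single transaction,
-- with ID bound to 1, 2 and 3, followed by a COMMIT.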

The beauty here is that all the stages mentioned above work in parallel. Each stage is separated from the next by a queue, which acts like a buffer. E.g., extraction from the MySQL binary log is usually very fast, so it pushes data into the queue from which the next stage reads, and so on; thus extraction is not blocked by slower stages downstream.

Summary

I have briefly shared with you how Tungsten Replicator implements real-time replication from MySQL to Oracle. Of course, each topic above could be extended into an article of its own, but for now I just invite you to download the latest Replicator release, try out MySQL->MySQL replication and, when comfortable, the MySQL->Oracle one. My colleague, Jeff Mace, has made it a lot easier to set up advanced topologies like these with recent additions to the tungsten-installer.

Finally, heterogeneous replication projects like these are great fun to work on, and I'm sure more will come. That said, cross-DBMS replication is primarily driven by customer demand, and customers drive the priorities.

Happy replicating!

2 comments:

  1. Are there any intentions to handle bidirectional replication between Oracle and MySQL?

  2. Hi! We are working on extracting data using Oracle Change Data Capture (CDC). We hope to have a prototype in the Tungsten Replicator 2.0.5 build.
