Converting MySQL database character set to UTF8

Recently I was engaged to convert the character set of a few MySQL databases from latin1 to UTF8. This article describes the approach taken; the examples below use the utf8_unicode_ci collation.

First I will describe the various levels at which we can change the character set and collation, and then we will see how to convert the existing data in a database to the required character set.

Back up your database:

Before even thinking about changing the character set of your database, take a backup of the database using whatever backup method you have tested – mysqldump, MySQL Enterprise Backup, export, etc.

In case something goes wrong, we will still have the data and can recreate the required tables/databases.

Make sure your backup and restore methods are proven, meaning that you have successfully restored tables/databases from them before.
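
For example, a minimal mysqldump-based backup of a single database might look like the following sketch (<db_name> is a placeholder; add whatever options your environment needs):

mysqldump --single-transaction --routines --triggers <db_name> > <db_name>_backup.sql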

Setting the character set at various levels:

We can see the following character set parameters:

root [mysql] >show variables like '%character%set%';
+--------------------------+---------------------------------------------------------------+
| Variable_name            | Value                                                         |
+--------------------------+---------------------------------------------------------------+
| character_set_client     | utf8                                                          |
| character_set_connection | utf8                                                          |
| character_set_database   | latin1                                                        |
| character_set_filesystem | binary                                                        |
| character_set_results    | utf8                                                          |
| character_set_server     | latin1                                                        |
| character_set_system     | utf8                                                          |
| character_sets_dir       | /usr/local/mysql-5.6.16-linux-glibc2.5-x86_64/share/charsets/ |
+--------------------------+---------------------------------------------------------------+
8 rows in set (0.00 sec)

character_set_client – The character set for statements that arrive from the client. If the client does not set a character set while connecting, this character set is used for statements sent by the client; otherwise, the value set by the client during connection overrides it.

character_set_connection – The character set used for literals that do not have a character set introducer and for number-to-string conversion.

character_set_database – The character set of the default (current) database. It changes whenever we switch the default database on the server; if that database does not have a character set defined, the server character set is used.

character_set_filesystem – This character set is used to interpret string literals that refer to file names at file system level, such as in the LOAD DATA INFILE and SELECT … INTO OUTFILE statements.

character_set_results – The character set used for returning query results, such as result sets, to the client. If the client specified a character set for its connection, that value is used instead.

character_set_server – The character set defined at the server level. Any newly created database will use this character set, unless we define a character set at the database level.

character_set_system – The character set used by the server for storing metadata information. For example, the return values of functions such as USER(), CURRENT_USER(), SESSION_USER(), SYSTEM_USER(), DATABASE() and VERSION() are returned in the character set assigned to this variable.

character_sets_dir – The directory where all character sets are installed.

We can set the character set at the following levels:

Server Level:

We can do this by setting the parameter character_set_server in our main my.cnf file, but this requires a restart. Once the MySQL server is bounced, it will pick up the new value of this parameter and use it as the new default character set. This does not change anything in existing data or objects; only newly created databases are affected.
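
For example, the relevant lines in my.cnf would look like this (an illustrative sketch; the collation shown matches the commands used later in this article):

[mysqld]
character_set_server = utf8
collation_server     = utf8_unicode_ci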

Database Level:

We can alter any database on our MySQL server and change its character set to UTF8 using the following command:

ALTER DATABASE <db_name> DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

Again, this does not affect existing data or objects. It only takes effect for future objects/tables that we create in this database.
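
To verify the database-level setting afterwards, you can query information_schema:

SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = '<db_name>';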

Table Level:

We can use the ALTER TABLE command to set the character set for a table:

ALTER TABLE <table_name> CHARACTER SET utf8 COLLATE utf8_unicode_ci;

So, if you have many tables in the database, you can use the following query to dynamically generate a script that sets the character set for all required tables in the database:

SELECT CONCAT("ALTER TABLE ",TABLE_SCHEMA,".",TABLE_NAME," CHARACTER SET utf8 COLLATE utf8_unicode_ci;") AS alter_sql
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = '<db_name>';

But this will not change existing columns/data in the table. It will take effect only for new columns added to the table.
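
You can confirm which tables have picked up the new default by checking the table collation in information_schema:

SELECT TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = '<db_name>';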

Changing the character set of existing data:

The above steps set the character set at various levels so that future objects and data will be created in UTF8.

Now, to change the character set of existing data, we need to change the character set of every text column of every table in the database where we want to convert it.

We can use multiple approaches for converting the character set of existing data, and the Percona blog provides a very good reference for these methods along with their advantages and disadvantages.

Here, I would like to highlight the difference between two methods that can be used to convert character sets.

Doing it at the table level:

The following command can be used to convert the character set at the table level:

ALTER TABLE <table_name> CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

This takes care of converting the character set for all columns in the table. The problem with this approach is that, if you have text-type columns (TEXT, MEDIUMTEXT, LONGTEXT, TINYTEXT, VARCHAR, ENUM), it can end up changing the data type of those columns. An example is given in the Percona blog mentioned above, where a TEXT column got converted to MEDIUMTEXT.

The safest and most careful way to convert the character set is to do it for each text column separately on each table.

You can use the following command to change the character set of a column (note that the column definition, including the data type, must be repeated):

alter table <table_name> change <column_name> <column_name> <data_type> CHARACTER SET utf8;

Example:

alter table P1_TABLE change COL1 COL1 TEXT CHARACTER SET UTF8;

In the above command/example, we used the column name twice. That is required: the CHANGE clause takes the old column name followed by the new column name and its definition, even when the name stays the same.
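
To see which character set each column currently uses, you can query information_schema.COLUMNS:

SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = '<db_name>'
AND TABLE_NAME = '<table_name>'
AND CHARACTER_SET_NAME IS NOT NULL;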

But wait, can the above approach convert the data correctly to the required character set?
It may not. Check the article in the references below, which describes the issues we face when we directly try to convert the character set of a column.

Sometimes directly converting can garble the data. The best way is to convert the column to its binary equivalent first, and then convert the data type and character set of the column to the required ones. The following commands can be used:

alter table P1_TABLE change COL1 COL1 BLOB;
alter table P1_TABLE change COL1 COL1 TEXT CHARACTER SET UTF8;

Since my column had the TEXT data type, its binary equivalent is BLOB. The binary equivalents of the various text data types are:

  • CHAR –> BINARY
  • TEXT –> BLOB
  • TINYTEXT –> TINYBLOB
  • MEDIUMTEXT –> MEDIUMBLOB
  • LONGTEXT –> LONGBLOB
  • VARCHAR() –> VARBINARY() (Use same data length)
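
For example, a VARCHAR column would be round-tripped through VARBINARY of the same length (COL2 and the length 255 here are hypothetical):

alter table P1_TABLE change COL2 COL2 VARBINARY(255);
alter table P1_TABLE change COL2 COL2 VARCHAR(255) CHARACTER SET utf8;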

Automating character set conversion:

You can create a simple script with all the required commands using dynamic SQL.

Note that if CHARACTER_SET_NAME is NULL in the information_schema COLUMNS table for a column, it means that the column is numeric, binary, or of some other data type that does not need character set conversion.

The following dynamic SQL can be used to generate the script automatically:

select concat("alter table ",TABLE_NAME," change ",COLUMN_NAME," ",COLUMN_NAME," BLOB;",
"alter table ",TABLE_NAME," change ",COLUMN_NAME," ",COLUMN_NAME," ",IF(DATA_TYPE in ('varchar','char'),concat(DATA_TYPE,"(",CHARACTER_MAXIMUM_LENGTH,")"),DATA_TYPE)," CHARACTER SET utf8;")
from information_schema.columns
where TABLE_SCHEMA = '<db_name>'
and DATA_TYPE <> 'ENUM'
and CHARACTER_SET_NAME is not null;
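
As an illustration, for a hypothetical table t1 with a VARCHAR(100) column c1, the query above would generate:

alter table t1 change c1 c1 BLOB;alter table t1 change c1 c1 varchar(100) CHARACTER SET utf8;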

The ENUM case is different: you need to specify all the ENUM values when you convert back to the required data type and character set. Check the WordPress Codex article in the references for more details.
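
A sketch for an ENUM column, following the same BLOB round-trip described above, might look like this (COL3 and the values 'a' and 'b' are hypothetical; list every value your column actually allows):

alter table P1_TABLE change COL3 COL3 BLOB;
alter table P1_TABLE change COL3 COL3 ENUM('a','b') CHARACTER SET utf8;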

Disclaimer: Please do not run the above SQL directly on production without testing. Make sure you write your own SQL as per your needs, based on the databases and objects present in your environment.

Hope this helps !!

References:

https://codex.wordpress.org/Converting_Database_Character_Sets

Converting Character Sets

http://www.bothernomore.com/2008/12/16/character-encoding-hell/

Cassandra 2.0 Architecture

In this article, I will cover the various key structures that make up Cassandra. We will also see which structures reside in memory and which reside on disk.

In the next article, I will give an overview of the key components that use these structures to run Cassandra. Further articles will cover each structure/component in more detail.

Cassandra Node Architecture:

Cassandra is cluster software, meaning it is installed/deployed on multiple servers which together form a Cassandra cluster. In my previous article, I mentioned how to install Cassandra on a single server using the CCM tool, which simulates a Cassandra cluster on one server.

Each server that is part of the cluster is called a node. So a node is essentially a server that runs the Cassandra software and holds some part of the data.

Cassandra distributes data across all nodes in the cluster, so every node is responsible for owning part of the data.
The node architecture of Cassandra looks like the figure below: the nodes form a ring.

[Figure: cassandra_architecture2 – Cassandra nodes arranged in a ring]

Structures in Cassandra

The following are the various structures of Cassandra present on each node (either in memory or on disk):

  • CommitLog
  • SSTable
  • MemTable
  • RowCache
  • KeyCache
  • SSTableIndex
  • SSTableIndexSummary
  • BloomFilter
  • Compression offset

Let's take an overview of each of these structures.

CommitLog [Disk]:

The commit log is a disk-level file which stores a log record of every write (mutation) happening in Cassandra on that node. This file is stored on disk on each node in the cluster. Whenever a write happens on a node, the commit log on disk is updated first with the changed data, followed by the MemTable in memory. This ensures durability. People who are familiar with Oracle terminology can think of the commit log as the online redo logs.

MemTable [Memory]:

A MemTable is a dedicated in-memory structure maintained for each Cassandra table. It holds recently written data that has not yet been flushed to disk. Whenever data is read on a node, Cassandra first checks whether the latest data is present in the MemTable; data found there is merged with data read from disk (from SSTables) to produce the result. Since each Cassandra table has its own MemTable, reads and writes on one table do not block those on another. Multiple updates to a single column result in multiple entries in the commit log but a single entry in the MemTable. The MemTable is flushed to disk when predefined criteria are met, such as maximum size, timeout, or number of mutations.
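
For reference, a few of the cassandra.yaml settings that control the commit log and MemTable flushing behaviour look like this (names as in the Cassandra 2.0 configuration; the values shown are illustrative, not recommendations):

# cassandra.yaml – illustrative values
commitlog_sync: periodic              # how writes to the commit log are synced to disk
commitlog_sync_period_in_ms: 10000    # sync interval for periodic mode
commitlog_segment_size_in_mb: 32      # size of each commit log segment file
memtable_total_space_in_mb: 2048      # flush MemTables when total space exceeds this
memtable_flush_writers: 1             # number of MemTable flush threads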

SSTable [Disk]:

SSTables are the on-disk tables. Every Cassandra table has SSTable files created on disk; each SSTable comprises six files, which together represent a single SSTable.
The six files present on disk for each SSTable are:
1) Bloom Filter
2) Index
3) Data
4) Index Summary
5) Compression Info
6) Statistics

The Data file (#3 above) contains the actual table data.
The other files are explained along with their respective components below.
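
On disk, these appear as a set of files sharing a common prefix. As an illustration (Cassandra 2.0 naming, with a hypothetical keyspace ks and table t1; 'jb' is the SSTable format version used by Cassandra 2.0):

ks-t1-jb-1-Filter.db            (bloom filter, #1)
ks-t1-jb-1-Index.db             (index, #2)
ks-t1-jb-1-Data.db              (data, #3)
ks-t1-jb-1-Summary.db           (index summary, #4)
ks-t1-jb-1-CompressionInfo.db   (compression info, #5)
ks-t1-jb-1-Statistics.db        (statistics, #6)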

RowCache [Memory]:

This is an off-heap memory structure on each node which caches complete rows of a table if row caching is enabled for that table. We can enable/disable the row cache when creating the table, or later by altering it. Every table in Cassandra has a "caching" property whose valid values are:
NONE – no caching
KEYS_ONLY – key caching only
ROWS_ONLY – complete row caching only
ALL – both row and key caching
When a requested row is found (in its latest version) in the row cache, Cassandra can skip all the steps needed to check and retrieve the row from the on-disk SSTable. This provides a huge performance benefit.
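
For example, in CQL (Cassandra 2.0 syntax; the table and columns here are hypothetical):

CREATE TABLE users (
    user_id uuid PRIMARY KEY,
    name    text
) WITH caching = 'rows_only';

-- or change it on an existing table:
ALTER TABLE users WITH caching = 'all';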

KeyCache [Memory]:

This is an on-heap memory structure on each node which contains partition keys and their offsets in the SSTable on disk. This helps reduce disk seeks while reading data from an SSTable. It is configurable at the table level and can be enabled using the KEYS_ONLY or ALL setting of the caching property of a table. So when a read is requested, Cassandra first checks whether the row is present in the row cache (if it is enabled for that table). If the row is not present in the row cache, Cassandra consults the bloom filter, which tells whether the data might exist in an SSTable or definitely does not exist there. Based on the result from the bloom filter, Cassandra checks for the keys in the key cache and directly gets the offset positions of those keys in the SSTable on disk.

SSTableIndex [Disk]:

The primary key index of each SSTable is stored in a separate file on disk (file #2 above). This index is used for faster lookups in the SSTable. A primary key is mandatory for a table in Cassandra so that each row in the table can be uniquely identified. Often the primary key is the same as the partition key, based on which data is partitioned and distributed to the various nodes in the cluster.
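
A small CQL sketch of how the primary key relates to the partition key (the table is hypothetical; the first component of PRIMARY KEY is the partition key):

CREATE TABLE orders (
    customer_id uuid,       -- partition key: decides which nodes own the row
    order_id    timeuuid,   -- clustering column: orders rows within a partition
    amount      decimal,
    PRIMARY KEY (customer_id, order_id)
);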

Partition Summary [Memory and Disk]:

The partition summary is an off-heap in-memory sampling of the partition index that speeds up access to the index on disk. The default sampling ratio is 128, meaning that for every 128 records in the index file we keep 1 record in the partition summary (so an index with, say, 1,280,000 entries is summarized by about 10,000 entries). Each record of the partition summary holds a key value and its offset position in the index. So when a read request comes for a record and it is not found in the row cache or key cache, Cassandra checks the partition summary for the approximate offset of that key in the index file on disk. Since not all index records are stored in the summary, this gives a rough estimate of where to look in the index file, which reduces disk seeks.
The partition summary is also stored on disk in a file (file #4 above).
The partition summary looks like the figure below.

[Figure: partition_summary – sampled entries from the partition index]

Bloom Filter [Memory and Disk]:

A bloom filter is an off-heap, in-memory, hash-based probabilistic data structure used to test whether a specific member is part of a set. It can give false positives, but it can never give a false negative. That means a bloom filter may report that a record might be present in a table on disk when we will not actually find it, but it will never say a record is absent when it is actually present on disk. This helps avoid unnecessary seeks for data that is not present at all.
The bloom filter is also persisted in a disk file (file #1 above), which contains the serialized bloom filter for the partition keys.
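
The false-positive rate of the bloom filter is tunable per table in CQL (hypothetical table name; a lower value means fewer false positives but a larger filter):

ALTER TABLE users WITH bloom_filter_fp_chance = 0.01;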

Compression offset maps [Memory and Disk]:

The compression offset map holds the offset information for compressed blocks. By default all tables in Cassandra are compressed, and the more compressed data there is, the larger the compression offset map. When Cassandra needs to look up data, it consults the in-memory compression offset map and unpacks the appropriate data chunk to get to the columns. Both writes and reads are affected, because chunks have to be compressed and uncompressed. The compression offset map is stored as an off-heap component in memory, and it is also saved on disk in a separate file for each SSTable (file #5 above).
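
Compression is also configurable per table in CQL (Cassandra 2.0 option names; the table and values are illustrative):

ALTER TABLE users
WITH compression = {'sstable_compression': 'LZ4Compressor', 'chunk_length_kb': 64};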

So if we put all the above structures together and identify which of them live on disk, in on-heap memory, and in off-heap memory, it looks like the figure below.

[Figure: cass_arch – structures grouped into on-disk, on-heap, and off-heap]

Hope this helps !!