
Oracle will boost MySQL, release Cloud Office suite

Oracle Corp. promised to aggressively push its newly acquired MySQL open-source database, rather than kill it.

Oracle also plans to continue investing in OpenOffice.org, the longtime Microsoft Office challenger from Sun Microsystems Inc., and to maintain its independence, but it will also launch a separate cloud productivity suite similar to Google Docs.

Many users feared that Oracle would bury MySQL, a lightweight database that’s gaining acceptance in corporate enterprises after starting out as a favorite of Web start-ups, to avoid cannibalizing its flagship Oracle Database.

But Oracle plans to keep MySQL’s sales team independent while improving MySQL’s code, support and compatibility with other Oracle apps. Oracle will oversee MySQL, OpenOffice.org and other open-source apps in Oracle’s Open-Source Software division.

During a webcast briefing, Oracle executives said the company’s acquisition of Sun’s many top-notch technologies will allow the combined company to offer “complete, open, integrated systems.” That vow appears to be a challenge to IBM, which prior to the Oracle-Sun combination, was considered the largest proponent of enterprise use of open-source technologies.

Here’s a look at some of the highlights of today’s presentation:

Java: Oracle plans to “extend and enhance the reach of Java,” according to Thomas Kurian, an executive vice president. This will be achieved by integrating and simplifying the Java platform runtime — specifically, delivering Version 7 of the Java Standard Edition client for desktop PCs with a variety of improvements, while making the mobile version, Java ME, compatible with the desktop version to lessen work for programmers.

Oracle also vowed to make it easier for Web developers using JavaScript to work with Java. These moves, Kurian said, will all help revitalize the Java developer community, which numbers 10 million.

Finally, the JavaOne show will remain independent, but it will now take place during the Oracle OpenWorld conference, which is scheduled to be held in San Francisco in September.

OpenOffice.org: OpenOffice.org will be managed as an independent business unit, with Sun’s development and support teams retained. Oracle will continue to support the free community edition of OpenOffice.org. However, Oracle also plans to deliver a cloud offering called Oracle Cloud Office, which had been under development for a while. Oracle did not comment on the fate of StarOffice, the paid, supported version of OpenOffice.org that competes with IBM’s own OpenOffice.org-based Lotus Symphony.

Solaris: Sun open-sourced its longtime server operating system in 2005. A former Sun executive who now serves as executive vice president of hardware engineering at Oracle said the company plans to increase investment in Solaris so that, among other things, it will be able to run thousands of CPU threads simultaneously and handle multiple terabytes of memory.

Linux: Oracle has thousands of customers for Unbreakable Linux, its supported Linux distribution based on Red Hat Enterprise Linux, said Edward Screven, Oracle's chief corporate architect. Oracle will now invest in both Linux and Solaris.


How Big Can MySQL Tables Be?

MySQL 3.22 had a 4GB (4 gigabyte) limit on table size. With the MyISAM storage engine in MySQL 3.23, the maximum table size was increased to 8 million terabytes (2^63 bytes). With this larger allowed table size, the maximum effective table size for MySQL databases is now usually determined by operating system constraints on file sizes, not by MySQL internal limits.

The InnoDB storage engine maintains InnoDB tables within a tablespace that can be created from several files. This allows a table to exceed the maximum individual file size. The tablespace can include raw disk partitions, which allows extremely large tables. The maximum tablespace size is 64TB.
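You can check how the shared tablespace is laid out from SQL. A minimal sketch (the multi-file value shown in the comment is hypothetical and would be set in the server configuration, not from SQL):

SHOW GLOBAL VARIABLES LIKE 'innodb_data_file_path';
-- A multi-file tablespace configured in my.cnf might look like:
--   innodb_data_file_path = ibdata1:10G;ibdata2:10G:autoextend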

The following list gives some examples of operating system file-size limits. This is only a rough guide and is not intended to be definitive. For the most up-to-date information, be sure to check the documentation specific to your operating system.

Linux 2.2-Intel 32-bit – 2GB (LFS: 4GB)
Linux 2.4 (using ext3 filesystem) – 4TB
Solaris 9/10 – 16TB
NetWare w/NSS filesystem – 8TB
win32 w/ FAT/FAT32 – 2GB/4GB
win32 w/ NTFS – 2TB (possibly larger)
MacOS X w/ HFS+ – 2TB

On Linux 2.2, you can get MyISAM tables larger than 2GB in size by using the Large File Support (LFS) patch for the ext2 filesystem. On Linux 2.4, patches also exist for ReiserFS to get support for big files (up to 2TB). Most current Linux distributions are based on kernel 2.4 and already include all the required LFS patches. With JFS and XFS, petabyte and larger files are possible on Linux. However, the maximum available file size still depends on several factors, one of them being the filesystem used to store MySQL tables.

For a detailed overview about LFS in Linux, have a look at Andreas Jaeger’s Large File Support in Linux page at http://www.suse.de/~aj/linux_lfs.html.

Windows users, please note: FAT and VFAT (FAT32) are not considered suitable for production use with MySQL. Use NTFS instead.

By default, MySQL creates MyISAM tables with an internal structure that allows a maximum size of about 4GB. You can check the maximum table size for a table with the SHOW TABLE STATUS statement or with myisamchk -dv tbl_name.

If you need a MyISAM table that will be larger than 4GB in size (and your operating system supports large files), the CREATE TABLE statement allows AVG_ROW_LENGTH and MAX_ROWS options. You can also change these options with ALTER TABLE after the table has been created, to increase the table’s maximum allowable size.
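As a sketch, with an invented table name, this is what raising the limit looks like; SHOW TABLE STATUS then reports the new Max_data_length:

CREATE TABLE big_log (
  id  INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  msg VARCHAR(255)
) ENGINE=MyISAM MAX_ROWS=1000000000 AVG_ROW_LENGTH=128;

-- Raise the limit on an existing table the same way:
ALTER TABLE big_log MAX_ROWS=1000000000 AVG_ROW_LENGTH=128;

-- Verify the new maximum size:
SHOW TABLE STATUS LIKE 'big_log';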

Other ways to work around file-size limits for MyISAM tables are as follows:

* If your large table is read-only, you can use myisampack to compress it. myisampack usually compresses a table by at least 50%, so you can have, in effect, much bigger tables. myisampack also can merge multiple tables into a single table.
* Another way to get around the operating system file-size limit for MyISAM data files is to use the RAID table options (see the MySQL manual for details).
* MySQL includes a MERGE library that allows you to handle a collection of identically structured MyISAM tables as a single MERGE table, as the sketch below shows.
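Here is a minimal sketch of the MERGE approach, using invented table names; the underlying MyISAM tables must be identical in structure:

CREATE TABLE log_2005 (id INT NOT NULL, msg VARCHAR(255), KEY (id)) ENGINE=MyISAM;
CREATE TABLE log_2006 (id INT NOT NULL, msg VARCHAR(255), KEY (id)) ENGINE=MyISAM;

-- One logical table over both; inserts go to the last underlying table
CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255), KEY (id))
  ENGINE=MERGE UNION=(log_2005, log_2006) INSERT_METHOD=LAST;

SELECT COUNT(*) FROM log_all;  -- reads both underlying tables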

MySQL Crash Recovery

MySQL is known for its stability, but like any other application it has bugs, so it may crash sometimes. The operating system may also be flawed, the hardware may have problems, or the power may simply go out. All of these mean the same thing: an unexpected MySQL shutdown, possibly leaving various inconsistencies behind. And that is not the only problem, as we'll see.

MySQL has an angel process, mysqld_safe, which will restart the MySQL server in most cases. That is great, unless you have run into some bug which causes it to crash again. Such crashes quickly following one another are in a way worse, because they exercise many less-tested code paths in MySQL, so the potential for problems is larger.

So let's look at the problems that can happen during a crash, which you might need to take care of and which may seriously affect MySQL performance.

MyISAM Corruption – If you're writing to MyISAM tables, there is a very large chance of them becoming corrupted during a crash. Note that corruption may be hidden and not expose itself instantly: you may notice wrong query results days after a crash. Sometimes corrupted tables may be the cause of further crashes or hangs, and corruption may spread further through the table. You probably do not want any of this, so it is a very good idea to run MySQL with the myisam_recover option, which makes sure every improperly closed MyISAM table is checked the first time it is accessed. This option is, however, rather painful to use with web applications: users may issue different queries which trigger check/repair runs on many tables at once, which typically makes the system extremely slow and can also use up all allowed connections or run out of memory (myisam_sort_buffer_size is normally set pretty large). If this becomes a problem, I use a tiny script which moves all MyISAM tables out of the MySQL database directory, checks them with myisamchk, and moves them back into the running server. This looks scary but works great: until a table is checked and ready, the application gets an error rather than stalling forever, which allows the application to become partially functional as soon as possible. This hack is needed only in some cases; in most cases, using InnoDB for tables which you need recovered fast is the better solution.
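Checks and repairs can also be run by hand from SQL; a sketch with an invented table name (equivalent to what myisamchk does from the command line):

CHECK TABLE sessions;   -- reports corruption if any
REPAIR TABLE sessions;  -- rebuilds the table and its indexes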

Innodb Recovery – Unless you have hardware problems (99%) or have found a new InnoDB bug (1%), InnoDB recovery should be automatic and bring your database to a consistent state. Depending on the innodb_flush_log_at_trx_commit setting you may lose the last few committed transactions, but that is it. It is the performance of this process which may cause problems. As I already wrote, innodb_log_file_size and innodb_buffer_pool_size affect recovery time significantly, as does your workload. I should also mention that if you have innodb_file_per_table=1, your recovery speed (as well as that of many other operations) will depend on the number of InnoDB tables you have, so beware.
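The settings that most affect InnoDB recovery time can be inspected at runtime; for example:

SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';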

Binary log corruption – The binary log may become corrupted and out of sync with the database content. This will sometimes break replication, but if you're just planning on using the binary log for point-in-time recovery, it can go unnoticed. sync_binlog helps by syncing the binary log, at a performance penalty. If using InnoDB, you also might wish to use the innodb-safe-binlog option in MySQL 4.1 so your InnoDB log and binary log are synchronized. In MySQL 5.0, XA takes care of this synchronization.
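For example, the safest (and slowest) setting syncs the binary log to disk after every write; sync_binlog is a dynamic variable, so a quick sketch:

SET GLOBAL sync_binlog = 1;  -- fsync the binary log after each write
SHOW GLOBAL VARIABLES LIKE 'sync_binlog';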

.frm Corruption – Few people know that MySQL is not really ACID even with InnoDB tables, at least not for DDL statements. There is a chance of failing, for example, during a CREATE statement with the table created in the InnoDB dictionary but the .frm file not created or not completely written. Partially written .frm files, or .frm files out of sync with the internal InnoDB dictionary, may cause MySQL to fail with weird error messages. In MySQL 4.1 the sync_frm option was added, which reduces this problem, as the time window in which it can happen is much smaller. Still, if a failure happens just during the writing of the .frm file, nasty things may happen, not to mention potentially multi-operation DDL statements such as RENAME TABLE, which are the most vulnerable.

master.info corruption – If a slave happens to crash, you can also have relay log corruption and a corrupted master.info file. Not to mention that MyISAM tables can contain partially completed statements, and some updates can be totally lost. The safe approach is to re-clone slaves if they crash, or you can take the risk and try to continue. Sometimes you might be able to manually find the appropriate position even if the master.info file is out of sync, but I would not base my failure-handling scenarios on that.
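If you do take the risk, repointing a slave by hand looks roughly like this; the log file name and position below are purely hypothetical and must come from your own master's binary logs:

STOP SLAVE;
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- hypothetical coordinates
  MASTER_LOG_POS  = 4;
START SLAVE;
SHOW SLAVE STATUS;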

Cold Start – If you restart the MySQL server, its caches (key_buffer, innodb_buffer_pool, query_cache, table_cache) are cleared, and so, possibly, are the OS caches. This may reduce performance dramatically. So if you're bringing a server back after a crash, you might want to populate its caches. For the MyISAM key cache this can be done with the LOAD INDEX INTO CACHE statement; for other storage engines it can be done by issuing large index scan queries. Full table scan queries let you preload table data either into storage engine caches or into the OS cache. You can save these statements into a .sql file and use --init-file to make sure it is run on startup. The other approach is to prime the server with real queries (i.e., clone queries from another slave) before directing traffic to it.
If the application is not highly available, so there is only one server, you might wish to start serving only some users initially (returning errors to the others) and gradually increase the load as the server warms up. This may sound strange but makes a lot of sense: not only is waiting for pages which never load more frustrating for users than getting an honest "try again later" message, but warmup also takes longer under extreme load.
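A warmup script might look like the sketch below (table names are invented); saved as warmup.sql, it runs automatically at startup via --init-file=/path/to/warmup.sql:

LOAD INDEX INTO CACHE user;   -- preload a MyISAM table's indexes into the key cache
SELECT COUNT(*) FROM orders;  -- full scan of an InnoDB table to warm the buffer pool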

Innodb statistics – Unlike MyISAM, InnoDB does not store index cardinality in tables; instead, it computes it on the first table access after startup. This may take significant time if you have a very large number of tables (some users have hundreds of thousands of tables per database host). This is pretty much part of the cold start problem, but I wanted to point it out separately. To warm up this data you might run SELECT 1 FROM table_name LIMIT 1 for each table, or any other statement: it is the table open which matters.
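On MySQL 5.0 and later you can generate one such statement per table from information_schema; a sketch (run the generated statements afterwards):

SELECT CONCAT('SELECT 1 FROM `', table_schema, '`.`', table_name, '` LIMIT 1;')
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql');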
There are other problems you may experience related to MySQL crash recovery, such as restoring data from a backup or recovering a corrupted InnoDB tablespace, but I should write about them some other time.

Reference: http://www.mysqlperformanceblog.com/2006/07/30/mysql-crash-recovery/

Understanding MySQL Query Cache for PHP Developers

Problem Statement:

Many PHP developers using MySQL have an unclear understanding of the MySQL query cache, so we decided to write a series of introductory articles to get everyone on the same page. This article is the first installment of the series, and here we will introduce the basics of the query cache in MySQL. Note that unlike a typical book chapter, this article is a low-fat introduction to query caching for MySQL: less theory and more actionables.
What is a MySQL query cache?

It turns out that MySQL has a built-in query cache that can cache a specific type of query, the SELECT statement, to speed up delivery of result sets. The cache can increase performance in many cases but can also hurt performance if not used wisely.
What can be cached in the MySQL query cache?

Only SELECT statements can be cached, and this does not include prepared SELECT statements. Query caching only works for SELECT statements that are fully qualified and return the same result every time. This means you cannot use nondeterministic functions whose return value depends on the situation. For example:

// The following SELECT query can be cached
$stmt = "SELECT * FROM user WHERE active = 1";

// The following SELECT query cannot be cached (NOW() is nondeterministic)
$stmt = "SELECT * FROM user WHERE signup_date >= NOW()";

// The following SELECT query cannot be cached (RAND() is nondeterministic)
$stmt = "SELECT * FROM user ORDER BY RAND()";

Here are the requirements a query must meet to take advantage of the query cache:

* Only exact queries are serviced from the cache: a query must match the stored query byte for byte, as the example after this list shows.
* Queries with placeholders, such as the ones for prepared statements, are not cached in the query cache.
* Queries with user-defined functions or nondeterministic functions cannot be cached.
* Any table change (such as issuing an ALTER statement) removes all queries for that table from the cache.
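The exact-match rule is stricter than many developers expect: the comparison is byte for byte, so letter case and whitespace matter. These two statements, for instance (reusing the user table from the earlier examples), occupy two separate cache entries:

SELECT * FROM user WHERE active = 1;
select * from user where active = 1;
-- Not byte-identical, so each gets its own query cache entry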

Introduction to query cache parameters

The more you understand the query caching parameters, the better you will be at tuning the query cache to your advantage. First, find out which global query caching parameters you can fiddle with by running the following query at the mysql command-line prompt.

mysql> show global variables like '%query_cache%';

A sample output is shown below:

+------------------------------+-----------+
| Variable_name                | Value     |
+------------------------------+-----------+
| have_query_cache             | YES       |
| query_cache_limit            | 1048576   |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 536870912 |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF       |
+------------------------------+-----------+
6 rows in set (0.00 sec)

The purpose of each parameter is described briefly below:

* have_query_cache – whether the query cache is available in this server build (YES/NO); it does not indicate that caching is enabled
* query_cache_limit – the maximum size of a cacheable result set (default: 1048576 bytes or 1 MB). If your query returns a result set that is greater than the limit set here, it will NOT BE CACHED
* query_cache_min_res_unit – the smallest block size allocated by the query cache. Default is 4KB
* query_cache_size – the total memory available to the query cache
* query_cache_type – when set to ON (1), query caching is on for all applicable queries; when set to OFF (0), query caching is turned off; and when set to DEMAND (2), caching is on only for queries with the SQL_CACHE directive (a usage sketch follows this list)
* query_cache_wlock_invalidate – causes the query cache to invalidate any query in the cache if a write lock is taken on the table(s) it uses
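As a quick sketch of these parameters in action (the user table is carried over from the earlier examples; the cache size is arbitrary):

SET GLOBAL query_cache_size = 268435456;  -- give the cache 256MB
SET GLOBAL query_cache_type = DEMAND;     -- cache only when asked

SELECT SQL_CACHE * FROM user WHERE active = 1;     -- cached under DEMAND
SELECT SQL_NO_CACHE * FROM user WHERE active = 1;  -- never cached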

What's your query cache status right now?

To find out what's going on with your query cache, run the following command from the MySQL command-line prompt:

mysql> show status like '%qc%';

Here is a sample result:

+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 978       |
| Qcache_free_memory      | 527371984 |
| Qcache_hits             | 645545    |
| Qcache_inserts          | 130796    |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 417579    |
| Qcache_queries_in_cache | 4973      |
| Qcache_total_blocks     | 11167     |
+-------------------------+-----------+
8 rows in set (0.00 sec)

Here are some brief explanations of these status metrics:

* Qcache_free_blocks – number of memory blocks marked as free, which indicates memory fragmentation
* Qcache_free_memory – total amount of free memory in the query cache
* Qcache_hits – number of times a query result was served from the query cache
* Qcache_inserts – number of times query results were written to the query cache
* Qcache_lowmem_prunes – number of queries removed from the cache to free up memory
* Qcache_not_cached – number of queries that could not be cached (noncacheable, or blocked by the query_cache_type setting)
* Qcache_queries_in_cache – number of query results currently registered in the cache
* Qcache_total_blocks – total number of blocks in the query cache

Calculating query cache hits vs misses

Here is the formula for calculating hit ratio for query cache:

$totalSelectQueryCount = $comSelect + $qcacheHits;
$percentHits = ($qcacheHits * 100) / $totalSelectQueryCount;

What the above formula does is add up all the SELECT queries handled by the server using two MySQL global status counters, Com_select and Qcache_hits (Com_select counts only the SELECTs actually executed; cache hits are counted separately in Qcache_hits), and then compute the percentage that were served from the cache.

To get $comSelect, run the show global status like '%com_select%' query. For example:

mysql> show global status like '%com_select%';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Com_select    | 1739663 |
+---------------+---------+

To get $qcacheHits, run show status like '%qcache_hit%'. For example:

mysql> show status like '%qcache_hit%';
+---------------+----------+
| Variable_name | Value    |
+---------------+----------+
| Qcache_hits   | 20786961 |
+---------------+----------+

With the above sample numbers, the hit percentage is 92.28%, which is great.
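You can even let MySQL do the arithmetic itself; with the sample counters above:

SELECT (20786961 * 100) / (1739663 + 20786961) AS percent_hits;
-- returns 92.2773, i.e. about 92.28%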
Managing query cache

To manipulate your query cache, you can use the following MySQL statements from the MySQL command-line:

To remove all the queries from your query cache, run:

RESET QUERY CACHE;

To defragment the query cache memory, run:

FLUSH QUERY CACHE;