My analysis shows that, on average:

  • Right-sizing onto modern CPUs in AWS can reduce the target CPU core requirement by more than one-third.

  • Migrating suitable databases from Oracle EE to RDS Oracle SE can free up one-fifth of your on-prem licensing for re-use or retirement.

  • Up to half of all databases are candidates for migration to Open Source PostgreSQL.

Let’s dig into how…

I am proud to be part of a team of world-class database architects delivering solutions based on our technical and commercial expertise here at Cintra. Together, we are in the eye of the storm of Oracle workloads moving to the cloud. We migrate database workloads to all clouds; however, AWS is the cloud of choice for many organizations due to the breadth, depth, and maturity of database and analytics offerings.

When talking to customers, I often hear the same story: “I need to move my Oracle systems to the cloud, but the licensing from on-premises into the cloud is a challenge, and we cannot see a way around it without being faced with an increase in our license requirements.”

We address this challenge head-on with the “Database Optimization and Licensing Assessment (DBOLA).” DBOLA is an early Assess Phase activity that helps customers cut through the fear, uncertainty, and doubt (FUD) and navigate complex architecture and licensing scenarios, prescribing the best migration path to AWS for their Oracle and SQL Server database estates.

Every database migration partner uses its own set of tools for the assessment. We at Cintra have built a powerful and unique zero-impact discovery tool called Rapid Discovery. The tool can discover the complete spectrum of legacy and modern Oracle platforms and database software versions. Our approach automatically captures in-depth information on infrastructure, databases, and Oracle workloads in a single-step discovery process that our customers can deploy directly. It is agentless, fast, and non-intrusive, with no impact on the source environment’s security or performance. This discovery approach typically achieves fast security approval and avoids the weeks or months otherwise spent on data discovery, agent deployment, or opening firewall ports to extract discovery data.

Working with organizations globally over the past year, our team has assessed hundreds of workloads representing thousands of databases. Considering the compound value of the aggregated data, I couldn’t help but do some analysis.

The Data

We gather metadata, including database configurations, workload metrics, and license usage, which is used to define target architecture designs automatically. Note: no sensitive customer data within the database is collected during the discovery process.

The data set analyzed represents the following:

  • Customers across many industries, including transportation, energy, manufacturing, technology, and government.

  • Many geographies, including Europe, North America, South America, and Africa.

  • Database estates ranging from 2 to 2,500+ databases.

  • Databases running on Linux (OEL, RHEL, SUSE, CentOS), Solaris (SPARC and x86), IBM AIX, and Windows; virtualized (VMware, OVM, Xen, Solaris Zones, AIX LPARs, AWS EC2) or bare metal; and Oracle Engineered Systems, including Exadata, ODA, and SPARC SuperCluster.

  • Database sizes from 1 GB to 180+ TB, from single-instance/VM database servers to multi-node RAC clusters.

  • Oracle versions from 8i up to 21c.

The following table describes some useful dimensions of the dataset:

| Dimension | Value |
|---|---|
| Total count of databases in dataset | 5,769 |
| Total DB size (GB) | 6,822,230 |
| Total SGA+PGA (GB) | 169,680 |
| Total count of hosts (physical/virtual) | 3,163 |
| Hosts by operating system: AIX | 74 |
| Hosts by operating system: Linux | 2,050 |
| Hosts by operating system: Solaris | 176 |
| Hosts by operating system: Solaris x86 | 718 |
| Hosts by operating system: Windows | 145 |
| Total source CPU cores in dataset | 15,629 |
| Hosts by CPU vendor: AMD | 17 |
| Hosts by CPU vendor: Intel | 2,896 |
| Hosts by CPU vendor: IBM | 74 |
| Hosts by CPU vendor: Sun/Oracle | 176 |
| Average CPU age in dataset | 6 years |
| Oldest CPU in dataset (UltraSPARC T2) | Q4 2007 |
| Newest CPU in dataset (IBM Power10) | Q3 2021 |

In this article, I will describe my findings related to one of the most common questions: licensing. I have applied our standard rules for optimization over the entire estate, capturing the resulting AWS shapes/storage and the basic target licensing metric: count of vCPU.

For the sake of simplicity, I will not dig into specific Oracle options and packs, although we do in each specific customer assessment, often finding optimization opportunities.

First Level of Optimization: 131.37% of Licenses Required

The assessment dataset aggregates to 15,629 physical CPU cores on-premises, with the initial right-sizing of databases to AWS RDS requiring only 10,266 CPU cores. That is a reduction of just over one-third right out of the gate. Two key factors enable this initial reduction:

  • CPU models and ages span from 2007 to 2021; the average CPU age across the dataset is six years. AWS runs on the latest hardware, so for customers running on aging hardware, we typically realize an immediate reduction in the number of cores required to run the same workloads in AWS. We call this CPU Power right-sizing, the most basic of our optimization techniques. A reduction in cores translates directly into savings in most cases where Oracle is licensed on a per-core basis.

  • Rapid Discovery also gathers detailed information on server sizing and actual utilization (average, p90, p95, and max). This allows us to right-size the architecture for the cloud while ensuring it meets or exceeds current performance; a simplified sketch of the calculation follows this list. The flexibility of the cloud allows us to scale up and back down as needed.
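To make the arithmetic concrete, here is a minimal sketch of the CPU Power right-sizing idea in Python. The speedup factor, utilization, and headroom values are illustrative assumptions (in practice they come from benchmark data and each host's measured utilization), not figures from our dataset:

```python
import math

def rightsize_cores(source_cores: int,
                    util_p95: float,
                    core_speedup: float,
                    target_headroom: float = 0.7) -> int:
    """Estimate how many modern cores a workload needs.

    util_p95        -- measured p95 CPU utilization of the source host (0..1)
    core_speedup    -- how much faster one modern core is than one source core
                       (illustrative; derived from benchmarks in practice)
    target_headroom -- keep the target below this utilization at p95
    """
    busy_cores = source_cores * util_p95  # work consumed, in source-core units
    needed = busy_cores / core_speedup / target_headroom
    return max(1, math.ceil(needed))

# Hypothetical host: 16 cores from 2014, 45% busy at p95, and a modern
# core assumed to do ~1.8x the work of the old one.
print(rightsize_cores(16, 0.45, 1.8))  # -> 6 target cores
```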

(As a side note, we apply the same right-sizing at the storage layer, based on precise I/O workload measurements. This is as critical as CPU right-sizing, especially when migrating from systems such as Exadata. We will dig deeper into this topic in the next part of the series.)

Now, there will always be some outlier technical challenges. For example, modern IBM Power CPUs have high clock speeds, so right-sizing can be less effective for them. However, the biggest challenge is always Oracle’s licensing policy.

When licensing Oracle on AWS and Azure, you must be aware of the Oracle Cloud Policy, a publicly facing, non-contractual guide for licensing Oracle on virtual cloud infrastructures. This policy provides customers with guidance on how to count licenses in clouds such as Amazon EC2 or RDS. The short version: when Oracle is deployed on virtual CPUs (vCPUs), you must count two vCPUs (one core) as one license. It is important to note that when deploying Oracle on (1) EC2 Dedicated Hosts, (2) Amazon EC2 bare metal, or (3) VMware Cloud on AWS, you count a core as a core, so no 2:1 penalty is applied.

Such deployments have strong use cases. However, they are not efficient for most of the workloads we analyze, so for this analysis we’ll follow the general customer preference for AWS virtualized EC2 infrastructure or managed RDS database services. Does the Oracle Cloud Policy, then, simply double the licenses needed? Not quite. It converts our right-sized count of 10,266 cores into 20,532 vCPUs, translating to 131.37% of the on-premises license needs. Results vary across organizations and business cases; for example, migrations off SPARC are very challenging, and densely consolidated solutions are also problematic. But this is just the journey’s beginning, so let’s dig deeper.
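The vCPU arithmetic is easy to reproduce. This snippet simply applies the counting rule described above to the dataset’s own numbers:

```python
SOURCE_CORES = 15_629   # on-premises physical cores in the dataset
TARGET_CORES = 10_266   # cores needed after CPU Power right-sizing

# On virtualized EC2/RDS, the Oracle Cloud Policy counts two vCPUs
# (one hyper-threaded core) as one license.
target_vcpu = TARGET_CORES * 2
print(f"{target_vcpu:,} vCPUs = {target_vcpu / SOURCE_CORES:.2%} of on-prem cores")
# -> 20,532 vCPUs = 131.37% of on-prem cores

# On EC2 Dedicated Hosts, EC2 bare metal, or VMware Cloud on AWS,
# a core counts as a core, so the 2:1 penalty does not apply:
print(f"{TARGET_CORES / SOURCE_CORES:.2%}")
# -> 65.69%
```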

Second Level of Optimization: 80.27% of Licenses Required

The next optimization is converting Enterprise Edition (EE) to Standard Edition (SE) where possible. Many customers assume they need Enterprise Edition to meet availability requirements. However, with AWS features like CloudWatch for monitoring, Multi-AZ for availability, and cross-region backup replication for DR, SE can replace EE for many real-world production workloads. This frees up costly EE licensing for other uses: migration swing capacity, covering the license gap, putting licenses back on the shelf, or retiring them altogether. Most importantly, you can run Oracle SE as a License Included (rather than BYOL) option in AWS. You can scale your SE instances up and down and even stop them when unused, so you only pay for what you need. This brings cloud elasticity benefits to the rigid world of Oracle licensing.
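Because SE on RDS can be License Included, that elasticity translates directly into cost savings. As a minimal illustration using boto3 (the instance identifier and instance class below are placeholders, not customer values):

```python
import boto3

rds = boto3.client("rds")  # assumes AWS credentials are configured

def scale_down(instance_id: str, instance_class: str) -> None:
    """Resize an SE License Included instance, e.g. outside business hours."""
    rds.modify_db_instance(
        DBInstanceIdentifier=instance_id,
        DBInstanceClass=instance_class,
        ApplyImmediately=True,
    )

def stop_when_unused(instance_id: str) -> None:
    """Stop the instance entirely; with License Included, the per-hour
    instance charge (license included) stops accruing while it is stopped."""
    rds.stop_db_instance(DBInstanceIdentifier=instance_id)

scale_down("reporting-db", "db.m5.large")  # placeholder identifier and class
```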

While Oracle Standard Edition is not a good fit for some critical databases, our analysis determined that 47% (2,711 databases) were good candidates to move to Standard Edition licensing. Our assessment process automatically detects database workload and usage (Oracle options/packs/features, relevant workload parameters, etc.) and identifies EE-to-SE conversion candidates across large database estates; a simplified sketch of this screening follows the table below. Applying those rules (born of real-world experience), we end up with 12,546 EE vCPUs, or 80.27% of the on-prem license count.

Identify databases suitable for Oracle EE to SE migration:

| Metric | Value |
|---|---|
| EE to SE targets identified | 2,711 out of 5,769 (46.99%) |
| Target EE vCPU eliminated | 7,986 out of 20,532 (38.90%) |
| EE vCPU remaining | 12,546 |
| % of on-prem license after SE migration | 80.27% |
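Our production rule set is far richer than what fits in an article, but a heavily simplified sketch of rule-based EE-to-SE screening could look like the following. The feature list and thresholds are illustrative assumptions, not the actual rules we apply:

```python
# Features that require Enterprise Edition. A real rule set is much longer
# and is typically driven by DBA_FEATURE_USAGE_STATISTICS data.
EE_ONLY_FEATURES = {
    "Partitioning",
    "Real Application Clusters",
    "Data Guard",
    "Advanced Compression",
    "Parallel Query",
}

def is_se_candidate(used_features: set, cpu_threads: int, sga_pga_gb: float) -> bool:
    """Return True if a database looks like an SE2 candidate."""
    if used_features & EE_ONLY_FEATURES:
        return False    # depends on EE-only functionality
    if cpu_threads > 16:
        return False    # SE2 caps usable CPU threads per instance
    if sga_pga_gb > 128:
        return False    # illustrative sizing cut-off
    return True

# Hypothetical database: no EE-only feature usage and a modest footprint.
print(is_se_candidate({"Flashback Query"}, cpu_threads=8, sga_pga_gb=64))  # True
```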

Often there are application-level reasons not to use SE for some of the candidates identified, which may reduce the savings below the opportunities we find. However, in many cases we can target enough SE candidates to migrate to AWS and still save on licensing even at this stage. And this is just the beginning.

Third Level of Optimization: 70.64% of Licenses Required

Most organizations want to get to the cloud due to a compelling event – end-of-life hardware, a data center exit, software contract renewals, etc. This may require them to migrate quickly using an Oracle-to-Oracle migration strategy, which is also a less risky first step. Once a given workload is migrated to AWS, we can apply different techniques to further optimize both licenses and cost.

Many of the customers we speak with aspire to move their database workloads to open-source database engines such as PostgreSQL. This has many merits: these engines require no expensive license and support contracts, and they work well in cloud-native environments. Products like Amazon RDS for PostgreSQL and Amazon Aurora, Google Cloud SQL for PostgreSQL and Google AlloyDB, and Azure Database for PostgreSQL also help prevent lock-in. In my previous article, I dug deeper into the benefits of the AWS Aurora storage engine. And while refactoring databases can sound daunting, PostgreSQL is semantically similar to Oracle and generally easy for Oracle DBAs to learn.

Identify databases suitable for Oracle to Postgres migration (wave 1):

| Metric | Value |
|---|---|
| Postgres targets identified for rapid migration | 1,866 out of 5,769 (32.35%) |
| Target EE vCPU eliminated by Postgres wave 1 | 4,954 out of 20,532 (24.13%) |
| EE vCPU remaining after wave 1 alone (no SE) | 15,578 |
| % of on-prem license after Postgres wave 1 (no SE) | 99.67% |
| EE vCPU remaining after SE and Postgres wave 1 | 11,040 |
| % of on-prem license after SE and Postgres wave 1 | 70.64% |

Based on the data gathered, our assessment automation quickly identifies databases with little to no Oracle dependency: the low-hanging fruit for rapid PostgreSQL migration. We can also help with the setup, migration, and operation. The dataset shows that by combining SE and Postgres migrations, we can go down to 11,040 Oracle EE vCPUs, or 70.64% of the original on-prem license.
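As an illustration of how such automation can triage an estate, here is a hypothetical wave classifier. The signals and thresholds are invented for the example; the real assessment weighs many more factors (feature usage, application coupling, criticality, and so on):

```python
from dataclasses import dataclass

@dataclass
class DbProfile:
    plsql_lines: int           # volume of stored PL/SQL code
    oracle_only_features: int  # features with no straightforward Postgres equivalent
    size_gb: float

def postgres_wave(db: DbProfile) -> str:
    """Assign a hypothetical Postgres migration wave."""
    if db.oracle_only_features == 0 and db.plsql_lines < 1_000:
        return "wave 1"   # little to no Oracle dependency: rapid migration
    if db.oracle_only_features == 0 and db.plsql_lines < 50_000:
        return "wave 2"   # automated PL/SQL conversion tooling can help
    return "stay on Oracle"

print(postgres_wave(DbProfile(plsql_lines=200, oracle_only_features=0, size_gb=50)))
# -> wave 1
```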

Continuous Improvement: 58.31% of Licenses Required

Customers are often willing to dig deeper after seeing the “easy” targets migrate successfully to Postgres. At this point, we can provide tooling for the automated migration of PL/SQL code, which speeds up such migrations and lowers their cost.

Technically speaking, every workload can be migrated. However, there are complex workloads for which migrating off Oracle is impractical: either too complicated or too risky. Some off-the-shelf and in-house-built systems will stay on Oracle until retired or rewritten from scratch. Even so, we identified enough Postgres migration candidates to get down to 9,114 vCPUs left on Oracle EE, or 58.31% of the dataset’s original license usage. And while application-related considerations may initially prevent the migration of some identified candidates, there remains significant potential for license savings.

Identify databases suitable for Oracle to Postgres migration (wave 1+2):

| Metric | Value |
|---|---|
| Postgres targets identified, wave 1+2 | 2,899 out of 5,769 (50.08%) |
| Target EE vCPU eliminated by Postgres wave 1+2 | 7,618 out of 20,532 (37.10%) |
| EE vCPU remaining after wave 1+2 alone (no SE) | 12,914 |
| % of on-prem license after Postgres wave 1+2 (no SE) | 82.63% |
| EE vCPU remaining after SE and Postgres wave 1+2 | 9,114 |
| % of on-prem license after SE and Postgres wave 1+2 | 58.31% |

What About the Migration Approach?

We typically suggest that our customers start with an Oracle-to-Oracle migration, with consideration given to Amazon RDS for Oracle. A managed RDS database frees DBAs from operational management of the environment, opening up capacity for later modernization onto the open database engines in AWS. Our experience shows this is a lower-risk migration approach that can be delivered faster and with minimal impact on the business. We recommend the following best practices:

  • Discovery and initial assessment: Here, you receive detailed guidance on which service is the best fit for each of your databases and on the preferred deployment (instance type, Single/Multi-AZ, read replicas, storage type, backup and DR, etc.). You also get price and license estimates for all needed components. This assessment is free for the customer and covered under DBOLA.

  • Oracle-to-Oracle migration: This includes the next level of detailed assessment, a PoC where needed, database optimization, and the actual migration. At this step, we typically achieve license neutrality or better. The AWS Migration Acceleration Program (MAP) can partially offset the migration cost, starting with MAP Assess-funded migration planning and a TCO business case.

  • The first wave of Oracle to Postgres: While we discuss potential Postgres targets at the initial assessment phase, we go for the engine change after the initial migration. Here we address the “low-hanging fruit”: databases with lower risk and complexity for a database engine change.

  • Oracle to Postgres wave 2: Here, the migration to AWS starts bearing fruit at scale. We can help the customer with tools for automatically translating schemas and PL/SQL code. This step delivers the most significant reduction in EE license exposure.

  • Start leveraging purpose-built databases: This may require changes in application architecture, so it is not the first step. However, this change unlocks the real value of the cloud.

  • Application modernization: This runs in parallel with the previous step. Modern data stores are most efficient when used by modern apps. The main topics here are containerization, microservices, and resilient architectures.

Conclusion

If you have followed my posts, talks, and presentations in the last few years, you know my position: any workload can be migrated to the cloud. As an engineer and an Oracle DBA with decades of experience, I state this based on real-world experience.

Licensing is not typically my focus area. I intentionally skipped the licensing topic in previous articles on migrating Oracle to AWS. Things changed with my move to Cintra. I am working within a team of heavyweight Oracle architects and licensing champions, migrating solution after solution for real customers. I’ve gathered enough data (and some experience) to address this topic.

My analysis demonstrates that licensing can be neutralized, and that additional cost savings can be realized by following a strategy of modernizing onto open-source database engines like PostgreSQL. All the techniques described here (right-sizing, SE migration, Postgres migration) bring tangible results, as the dataset shows. Every customer is different, but we are consistently able to find solutions. This makes me even more confident that every workload can be migrated to AWS.

Also, remember that this is just the beginning of the cloud journey. It enables you to start digging into cloud-native purpose-built databases for modernization.

| Metric | Total | % of source cores | % of target vCPU |
|---|---|---|---|
| DB count | 5,769 | | |
| Source CPU cores | 15,629 | 100% | n/a |
| Target CPU cores | 10,266 | 65.69% | 50.00% |
| Target vCPU | 20,532 | 131.37% | 100.00% |
| License after migration to SE | 12,546 | 80.27% | 61.10% |
| License after SE + Postgres wave 1 | 11,040 | 70.64% | 53.77% |
| License after SE + Postgres wave 1+2 | 9,114 | 58.31% | 44.39% |
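For completeness, the percentages in the table reduce to simple ratios over the dataset’s raw counts; this snippet recomputes them:

```python
SOURCE_CORES = 15_629
TARGET_VCPU = 20_532

milestones = {
    "Target CPU cores": 10_266,
    "Target vCPU": 20_532,
    "License after migration to SE": 12_546,
    "License after SE + Postgres wave 1": 11_040,
    "License after SE + Postgres wave 1+2": 9_114,
}
for name, value in milestones.items():
    print(f"{name}: {value / SOURCE_CORES:.2%} of source cores, "
          f"{value / TARGET_VCPU:.2%} of target vCPU")
```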
