Today, organizations face many choices about where to run their applications and what is best for their employees and customers. The use of public (shared) clouds has emerged as an alternative to running applications on a private cloud within the organization's own data centers. While each has benefits, many enterprises use both types of clouds. Although specific applications may require highly tuned server, storage, or networking profiles, many applications can run in either environment.
Enterprises may run hundreds of different applications, each consuming differing amounts of compute, storage, and networking performance to deliver agreed-upon SLAs to customers (users). Highly tuned systems may also require a specific number of cores, amount of memory, storage type, or networking option unavailable in a public cloud environment. While public clouds offer many instance types, there are still many configurations that cannot be rented in a public cloud. For example, if terabytes of memory are required for a certain application, that configuration is generally not available from the major public cloud providers; nor is a small number of cores paired with a very large amount of memory.
An on-premises cloud allows enterprises to size a server or storage system to match their workload requirements. However, there may be times when an organization finds that its workloads require access to a public, shared cloud service. In this case, the ability to move applications through a simple user interface would greatly simplify the use of hybrid clouds while increasing performance. In addition, workloads vary throughout the day, week, month, and year, and there may be times when an internal cloud is overloaded and additional servers or storage are needed. In that case, the ability to automatically send specific workloads to run on a public cloud may be required.
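The "burst to a public cloud when the internal cloud is overloaded" behavior described above can be sketched as a simple placement policy. This is a minimal illustration, not any vendor's actual scheduler; all class names, fields, and capacity numbers are hypothetical:

```python
# Hypothetical sketch of a cloud-bursting placement policy: prefer the
# on-prem cluster, and burst eligible workloads to a public cloud only
# when on-prem capacity is exhausted. All names/numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    cores: int
    memory_gb: int
    cloud_eligible: bool = True  # e.g., no data-residency restriction

@dataclass
class OnPremCluster:
    free_cores: int
    free_memory_gb: int

    def can_host(self, w: Workload) -> bool:
        return w.cores <= self.free_cores and w.memory_gb <= self.free_memory_gb

def place(workload: Workload, cluster: OnPremCluster) -> str:
    """Return where the workload should run under this simple policy."""
    if cluster.can_host(workload):
        # Reserve on-prem capacity for this workload.
        cluster.free_cores -= workload.cores
        cluster.free_memory_gb -= workload.memory_gb
        return "on-prem"
    if workload.cloud_eligible:
        return "public-cloud"  # burst: on-prem is full
    return "queued"            # must wait for on-prem capacity

if __name__ == "__main__":
    cluster = OnPremCluster(free_cores=32, free_memory_gb=256)
    print(place(Workload("web", 16, 64), cluster))         # on-prem
    print(place(Workload("analytics", 64, 512), cluster))  # public-cloud
```

A real hybrid-cloud manager would add cost, data-gravity, and SLA inputs to this decision, but the core "overflow" logic follows the same shape.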
One of the benefits of any cloud service is the use of microservices. Microservices enable applications to be constructed from a number of separate, small services that each perform a specific task. Because they communicate over well-defined APIs, larger applications can be composed of microservices that can be quickly installed or invoked even when they reside on other servers. Microservices allow applications to be rapidly developed and deployed wherever a desired microservice resides.
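To make the idea concrete, here is a minimal sketch of one such small service exposing a well-defined HTTP API, plus a client that calls it as if it lived on another server. The "pricing" service, its endpoint, and its data are invented for illustration:

```python
# Minimal microservice sketch: one small service, one well-defined API.
# The service name, endpoint, and prices are hypothetical examples.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs

class PricingHandler(BaseHTTPRequestHandler):
    """Tiny service with a single endpoint: GET /price?sku=... -> JSON."""
    PRICES = {"widget": 9.99, "gadget": 14.50}  # stand-in data store

    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        sku = query.get("sku", [""])[0]
        body = json.dumps({"sku": sku, "price": self.PRICES.get(sku)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example's output quiet
        pass

def start_service(port=0):
    """Run the microservice on a background thread; return (server, port)."""
    server = ThreadingHTTPServer(("127.0.0.1", port), PricingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def get_price(port, sku):
    """Client side: the caller depends only on the HTTP API, not the host."""
    with urllib.request.urlopen(
            f"http://127.0.0.1:{port}/price?sku={sku}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server, port = start_service()
    print(get_price(port, "widget"))  # {'sku': 'widget', 'price': 9.99}
    server.shutdown()
```

Because the client only knows the API contract, the service behind it can run on-prem or in a public cloud without the caller changing, which is exactly what makes microservices attractive in a hybrid-cloud setting.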
Enterprise workloads continue to put a strain on previous generations of hardware, but usually not as fast as hardware performance and capacity advance. In addition, the work per watt of the most modern CPUs is about 7X what was available just five years ago. Taken together, the energy used per application decreases over time, allowing flexibility in where an application is executed, whether on-prem or in the cloud. Consolidating workloads through virtualization on the latest generations of servers and CPUs decreases the number of servers required in an on-prem data center, which also reduces overall energy consumption.
Supermicro offers a range of servers and storage systems built on top of the latest Intel or AMD CPUs and accelerators from NVIDIA, Intel, AMD, or others. Whether single, dual, or quad socket systems, the range of servers from Supermicro is application-optimized for both on-prem computing as well as cloud computing.
Supermicro will be showcasing several systems at .NEXT in its booth.
4th Gen Intel Xeon Scalable processor-based servers:
- BigTwin – SYS-221BT-HNC8R or SYS-621BT-HNC8R
- Hyper 2U – SYS-2221H-TN24R or SYS-621H-TN12R
- WIO 1U UP – SYS-511E-WR
4th Gen AMD EPYC based servers:
- Hyper 2U – AS-2125HS-TNR
- GrandTwin – AS-2115GT-HNTR
- Cloud DC 1U UP – AS-1115CS-TNR
For more information, please visit: www.supermicro.com/X13 and www.supermicro.com/aplus