3tera provides a virtual data center solution. Just as VMware virtualizes physical computers, 3tera’s AppLogic virtualizes entire data centers. This is impressive technology that delivers huge time savings and simplicity, bringing utility computing to the masses: administrators can now manage thousands of machines in an instant. Here is an exclusive interview with Bert Armijo, Co-Founder and Senior VP of Sales and Product Management at 3tera.
With so many technologies out there, what does 3tera AppLogic offer?
More than a decade after the Internet became an integral part of our lives, deploying and scaling an online service is still incredibly difficult and expensive, because most of the technologies being used were designed for enterprise IT. A few large Internet behemoths (Google, Amazon and eBay, to name a few) have developed their own infrastructure systems, most based on grid architectures, but these are unapproachable for smaller firms.
3tera designed our AppLogic grid operating system specifically to enable utility computing for Web 2.0 and SaaS applications – providing the ability to build, deploy, manage and scale entire distributed systems using utility resources. I’m not referring to some kind of pre-canned software package, but complete custom infrastructure, from firewalls and load balancers to web servers, database servers and NAS, all controlled with a browser.
To make this view of utility computing possible, AppLogic actually packages web applications – entire distributed systems with infrastructure, code and data – into portable, scalable entities. When you hit run, that definition is expressed on the grid. Your software never knows the difference – it thinks it’s running on standard hardware.
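To make that concrete, here is a rough sketch of what such a packaged definition could look like. This is purely illustrative Python – the component names and fields are invented for this sketch and are not AppLogic’s actual descriptor format:

    # Hypothetical sketch of an application packaged as a single entity:
    # infrastructure, topology and resource requirements in one definition.
    app = {
        "name": "my_saas_app",
        "components": {
            "fw":  {"type": "firewall",      "instances": 1},
            "lb":  {"type": "load_balancer", "instances": 1},
            "web": {"type": "web_server",    "instances": 4, "ram_gb": 2},
            "db":  {"type": "database",      "instances": 1, "ram_gb": 8},
            "nas": {"type": "storage",       "volume_gb": 750},
        },
        # Traffic flows fw -> lb -> web -> db; web and db share the NAS.
        "connections": [("fw", "lb"), ("lb", "web"), ("web", "db"),
                        ("web", "nas"), ("db", "nas")],
    }

    def run(app):
        """Express the definition on the grid: provision each component
        (placeholder logic standing in for the grid's allocator)."""
        for name, spec in app["components"].items():
            for i in range(spec.get("instances", 1)):
                print(f"provisioning {name}[{i}] as {spec['type']}")

    run(app)

Because the whole system is described in one place, the same definition can be copied, moved to another grid, or scaled by changing a few numbers.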
What are some areas in which AppLogic is heavily used?
The majority of applications running on AppLogic are Web 2.0 and SaaS, because for these users IT operations are the major portion of COGS (cost of goods sold). For many of them, utility computing is an enabling technology because it reduces the time-to-market and, more importantly, the capital required to bring their application online.
However, since our scalability demonstration in June, when we showcased the largest grid of commodity servers – almost 500 CPUs, 1TB of RAM and 50TB of storage under the control of a single user – we’ve been getting more interest from enterprise users. To be clear, these are leading-edge firms and their projects are still in proof of concept, but we see enterprise IT leaders now recognizing that a data center crunch is coming and looking for ways to use utility computing both in-house and from third-party providers.
Where do you think the future of data centers is heading?
We’re at the beginning of a huge transition – removing human hands from servers. This is a fundamental shift, because if you think about the way we run data centers today, from where we build them to the temperature they’re kept at, it’s all driven by the assumption that people need to interact physically with the servers. Even the concept of a server as a physical box is based on that. Utility computing breaks that model. In a utility computing data center, when a server fails there’s no system administrator scrambling to reset or replace it. Instead, the system simply deactivates it, and maintenance can be delayed until a cost-effective number of servers require attention. That could be a day or a month – or until the whole rack is decommissioned. With this change in usage, I believe over the next ten to fifteen years we’ll see major shifts in data center design and operation. What I’m talking about is a vision for a truly green data center, where power and cooling bring huge savings to operations.
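As a toy illustration of that fail-in-place policy (the class, threshold and method names below are invented for this sketch, not AppLogic internals), the logic amounts to deactivating failed servers and deferring the maintenance visit until a batch of them makes the trip worthwhile:

    # Toy sketch of a fail-in-place maintenance policy.
    class Grid:
        def __init__(self, maintenance_batch=20):
            self.failed = set()
            self.maintenance_batch = maintenance_batch

        def on_server_failure(self, server_id):
            # No administrator scrambles to reset the box; just retire it.
            self.deactivate(server_id)
            self.failed.add(server_id)
            # Only dispatch a technician once the visit is cost-effective.
            if len(self.failed) >= self.maintenance_batch:
                self.schedule_maintenance(sorted(self.failed))
                self.failed.clear()

        def deactivate(self, server_id):
            print(f"server {server_id} deactivated; workload moved elsewhere")

        def schedule_maintenance(self, servers):
            print(f"maintenance visit scheduled for {len(servers)} servers")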
First, we’ll be able to move data centers close to the source of power. Today, data centers are located near businesses because of the need for skilled labor, which requires transmitting power over great distances. Building new data centers close to power plants will save the 20 to 25% of energy typically lost in transmission. Google has already shown this works – each of their new data centers is close to a source of power.
Second, air is a terrible conductor of heat, and relying on it to cool servers wastes enormous amounts of energy just pumping it around the data center. Liquid cooling hasn’t been feasible because of the frequent changes in hardware we accepted as a necessity in the data center. However, when the failure of a server no longer requires intervention, liquid cooling becomes quite cost effective because servers can be completely sealed. In fact, what we think of as a server today may become just a portion of a larger system in the future.
Last, and perhaps most outlandish to some, I think we’ll see renewed interest in systems operating at superconducting temperatures. Over the past decade we’ve seen the speed of CPUs plateau while the emphasis has shifted to building more cores on a chip. If sealed liquid cooling systems run in large utility computing data centers, chip vendors can begin to investigate operating conditions that most of us couldn’t have maintained in the past. The energy savings achievable at those temperatures could dwarf all the efforts ongoing today.
Is AppLogic a competitor to VMware, Xen, and Parallels?
Traditional virtualization platforms slice physical servers into virtual machines to increase utilization. Figuring out how to combine numerous virtual machines into a working system is left to the administrator.
AppLogic is a grid operating system that we designed specifically to enable utility computing – the ability to build, deploy, manage and scale entire distributed systems using pay-as-you-go resources. To make this possible AppLogic actually packages entire systems, infrastructure, code and data into portable, scalable entities.
So, in a sense, while virtualization is about slicing up resources, AppLogic is about aggregation.
Is AppLogic a competitor to Amazon’s EC2 and S3 services?
Amazon’s EC2 is probably the best known utility computing service, but this market is just getting started and most users haven’t heard of, let alone tried, either service. The real competition to AppLogic is the status quo.
To put the competitive landscape in perspective: with AppLogic you can get a presence in any of several data centers in the US, a few in Europe, and soon in data centers in Asia and Australia as well. AppLogic users aren’t limited to running a canned set of apps – they can define and scale their own custom infrastructure, all with just a browser. None of this was possible just six months ago.
What is the scalability of AppLogic technology?
By partnering with hosting providers, 3tera is able to leverage their expertise and inventory. Our partners currently run almost 100,000 servers. So getting hardware resources isn’t a problem.
On the system side, a few months ago we demonstrated almost 500 CPUs, 1TB of RAM and 50TB of storage under the control of a single user. With each release we extend the resource boundary a little further. Today, if someone ordered 1,000 CPUs and 4TB of RAM, any of our partners could deliver it within hours.
What platforms does AppLogic support or integrate with?
AppLogic runs on standard x86-architecture servers, and the current release supports most Linux distributions, including CentOS, Debian, Fedora, SUSE, RHEL and others, as guest operating systems.
We’re working with Sun now to add support for running Solaris and then we’ll work on FreeBSD.
How much does it cost?
Most users choose hosted resources instead of bringing AppLogic in-house, following the full utility computing value proposition. The entry price for a development system is around $500/mo, which includes 8 CPU cores, 8GB of RAM, 750GB of storage and 1.5TB of transfer. Production packages start around $1,699 and offer more data transfer and a greater amount of fully mirrored storage for high availability.
In-house licenses for enterprise users start at around $150/server per month.
What’s next?
Most of our users are looking to have multiple points of presence for their applications, both for redundancy and for lower-latency user access. That has been incredibly difficult and expensive in the past. Right now we’re working to eliminate the visible boundaries of operating services on multiple continents, making it possible for any developer to run a multi-national service – again, with just a browser.
We are also working on a new way to perform data center operations that we call Dynamic Appliances. The alpha version will be available at the beginning of November. Dynamic Appliances are packaged data center operations – like backup, patching or migration – that operate within your application. They can be added to an application in mere minutes, simply by dragging and dropping them into the application with the AppLogic editor, and from then on the application gains that operational capability. For instance, you can give your application the ability to perform its own regular offsite backup. Simply drag the backup Dynamic Appliance into your application, configure a few properties like frequency and the number of backups to keep, and you’re done. Now your application will back itself up to DynaVol, S3 or virtually any other storage service.
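As a rough sketch of that configuration step (the class and property names below are invented for illustration – the real AppLogic editor is a graphical, drag-and-drop tool):

    # Hypothetical sketch of attaching a backup Dynamic Appliance to an app.
    class Application:
        def __init__(self, name):
            self.name = name
            self.appliances = []

    class BackupAppliance:
        def __init__(self, frequency_hours, keep, destination):
            self.frequency_hours = frequency_hours  # how often to run
            self.keep = keep                        # backups to retain
            self.destination = destination          # e.g. "DynaVol" or "S3"

        def attach_to(self, application):
            application.appliances.append(self)
            print(f"{application.name} now backs itself up to "
                  f"{self.destination} every {self.frequency_hours}h, "
                  f"keeping the last {self.keep} copies")

    backup = BackupAppliance(frequency_hours=24, keep=7, destination="S3")
    backup.attach_to(Application("my_saas_app"))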
Dynamic Appliances work by leveraging AppLogic’s ability to package distributed applications into manageable entities. Instead of containing large amounts of code to monitor the application’s performance, an appliance uses AppLogic’s monitoring system. The appliance uses that information to make operational decisions and issues commands to AppLogic to operate on the application. As an example, consider the SLA appliance, which gives an application the ability to manage its own resource level in order to maintain performance. The SLA appliance acquires statistics on application performance; if performance falls outside a user-specified range, the appliance issues commands to AppLogic. The commands can take several forms, but all result in resources being added to or removed from the application.
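In rough terms, the SLA appliance’s control loop reduces to something like the following sketch. The FakeGrid methods are hypothetical stand-ins for AppLogic’s monitoring system and command interface, which this does not reproduce:

    import random
    import time

    # Stand-in for AppLogic's monitoring system and command interface.
    class FakeGrid:
        def get_app_stats(self, app):
            return {"avg_response_time": random.uniform(0.1, 1.0)}

        def add_resources(self, app, cpus, ram_gb):
            print(f"{app}: adding {cpus} CPUs / {ram_gb}GB RAM")

        def remove_resources(self, app, cpus, ram_gb):
            print(f"{app}: removing {cpus} CPUs / {ram_gb}GB RAM")

    TARGET_MIN, TARGET_MAX = 0.2, 0.8  # user-specified response-time band (s)

    def sla_loop(app, grid, cycles=5, interval_s=0):
        for _ in range(cycles):
            latency = grid.get_app_stats(app)["avg_response_time"]
            if latency > TARGET_MAX:        # too slow: add resources
                grid.add_resources(app, cpus=2, ram_gb=4)
            elif latency < TARGET_MIN:      # overprovisioned: scale down
                grid.remove_resources(app, cpus=2, ram_gb=4)
            time.sleep(interval_s)

    sla_loop("my_saas_app", FakeGrid())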