Why is Intel Xeon well-suited for driving the cloud?
We sat down with Arash Shokouh, a former IBM engineer and IT specialist, to discuss why Intel’s Xeon line is the dominant force driving the world’s cloud infrastructure. We also delved into ARM’s emerging push into the server segment, and where this processing technology will need to improve to become a real competitor.
QuadraNet: What’s the big difference between Xeons and consumer-grade processors?
Shokouh: The main difference between the Xeons and Intel’s other architectures is that the Xeon is similar to the i7 processor, and the i7 is right now the fastest, most powerful multi-core consumer processor. The difference between the Xeon and the i7, even though they perform at the same speed, is that Xeons are designed to run cooler and at lower voltages. When you run cooler, you use less power for cooling, and when you run at lower voltages, you’re using less power. So when you have these processors running in servers 24/7, doing some sort of server application, whatever the company is doing, your electricity cost is going to be less.
The other thing is that Xeons can be put into multi-socket motherboards. With an i7 you can only have one processor in a computer system; with Xeons you can have many processors in the same system. So instead of one i7 you can have four Xeon chips in the same system, all working together and distributing the load for whatever applications you’re running. Xeons are designed for server applications, and it’s for the reasons I mentioned that they’re so popular. Also, i7s only go up to 4 cores, but Xeons go up to 6 cores. What that means is that when you’ve got a lot of applications or programs running on the same server, the work can be shared across those six cores. They run cooler, they run at lower voltages, which means less power consumption and a lower electricity bill. You can have several Xeon chips on the same motherboard, and you get more cores per Xeon. They’re simply designed for server applications, and that’s why they’ll probably be around for a while as Intel’s main server product.
QuadraNet: Why is it easier for developers and students to work on Intel’s platform as opposed to competitors from companies like IBM and AMD?
Shokouh: I haven’t worked specifically on AMD’s architecture, but I have experience with Intel’s x86 architecture, which was the original Intel processor architecture and was really popular as a 32-bit architecture. So I can’t really speak to AMD’s, but because Intel has so much documentation out there, and they’ve been around for such a long time, it’s a lot easier to work with them. For example, if you talk with any students going through an electrical or computer engineering program in college, they’ve probably had at least one or two courses where they had to build an Intel-based system. Intel makes all their specs public, just like all processor companies do, but since it’s been around for so long, colleges and professors tend to use Intel for their classroom projects. The result is you get a lot of engineers who are already very comfortable with the basics of how Intel works. I’m only speaking from the San Jose State University perspective, but if you get into some of the more advanced courses at SJSU, the grad school courses, then you have the option to take things like IBM-based architecture courses, where they focus on PowerPC, which is IBM’s architecture.
QuadraNet: When selecting a server product, should a business choose a Xeon-based product or something else like an Opteron-based product?
Shokouh: It really depends on the application. When someone comes to me and says, ‘Hey, I’m thinking about building a system that’s either AMD- or Intel-based,’ my response is that it really depends on the application. AMD is much better at parallel processing; their products, on average, have a lot more cores than the average Intel processor. So if you’ve got a computer that’s mostly used for doing multiple things at the same time (video processing while also video editing, photo editing, checking email), then I would lean more towards AMD, because AMD does a much better job at parallel processing, and does it efficiently. If you want to build a system that’s dedicated to doing one thing, 100% dedicated to video editing or 100% dedicated to gaming, then I would lean more towards the Intel side. Even though Intel has fewer cores per chip, they do a much better job at doing one thing as opposed to several things. So it really depends on the application: AMD for parallel processing, and Intel for more powerhouse, high-power-consumption, very fast applications.
QuadraNet: If the cloud service is meant for the masses, would Xeon be the better choice?
Shokouh: It depends on what the servers are used for. If it’s going to be accessed by thousands of users around the world, and the server is going to be working really hard, then I would lean more towards a power-intensive processor like an Intel Xeon. If it’s a server that’s mostly for in-house use, maybe for 10-20 employees to access, then I would lean more towards AMD, because it’s not going to be as taxed processing-wise; you’re not going to have as many people using it. So for very power-intensive workloads I would go with Intel, but for basic, in-house server applications I would go with AMD.
QuadraNet: Do you think that ARM’s big step up in the processing world will have a major impact on Intel’s x86 business?
Shokouh: ARM was originally designed for mobile applications. You’re going to find ARM processors mostly in smartphones, tablets, and small laptop computers that need very low power but still need some decent processing power at the same time. Companies have been using ARM more and more in server applications, because even though ARM has been low-power, it’s actually come a long way in terms of parallel processing. In the same ARM processor you can have several instructions happening in parallel at the same time, and the reason it’s not being used widely in servers is that it hasn’t gotten to the point where it’s fast enough to handle a server load like Intel or AMD can. So ARM will, for the near future, probably still be the main processor for smartphones, tablets, and small form-factor computers, but it is slowly starting to gain ground in servers, because people are getting tired of IBM processors that are super power-hungry and Intel processors that are becoming super expensive and power-hungry. AMD and ARM are making some headway for those reasons, but ARM was originally designed for small devices, not servers, so you won’t likely see it a lot in server applications, at least not yet.
QuadraNet: In terms of ARM’s use in server applications, do you feel that the technology is ready for the leap from mobile to servers?
Shokouh: To be honest with you, I don’t think we’re going to see ARM used widely in servers for a long time, and I’m talking probably 10 years out, because the whole idea behind ARM is that it’s meant to be low-power and small-footprint so that it can fit in small devices. You can build a home-project server based on something like the Raspberry Pi, an ARM-based device, and it’s small enough and powerful enough that you can run things like Linux, run servers on it, and do basic things. It can’t be used to share things between many people, but it might be able to do so between three or four people. But to answer your question, I honestly don’t think ARM will come anywhere near servers. Right now it’s used mainly in smartphones and tablets, and maybe a netbook here and there, but beyond that it’s nowhere near strong enough to be put into server applications.
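The Raspberry-Pi-as-a-basic-server idea above can be sketched with nothing but Python’s standard library. This minimal HTTP server (the response text is illustrative, and nothing here is Pi-specific; any Linux box running Python works) is roughly the scale of service a small ARM board can comfortably offer a handful of users:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A trivial plain-text response, about the workload a
        # Pi-class board can serve to a few LAN users.
        body = b"hello from a small ARM server\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page once to show it works, then shut down.
url = f"http://127.0.0.1:{server.server_address[1]}/"
reply = urllib.request.urlopen(url).read()
print(reply.decode())  # prints "hello from a small ARM server"
server.shutdown()
```

On a real board you would bind a fixed port and call `serve_forever()` in the foreground; the background thread here just lets the sketch demonstrate a round trip and exit cleanly.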
QuadraNet: Thank you for your time.