Interview with Intel’s Dan Ragland, Head of Overclocking: Tuning Alder Lake For Performance
The topic of overclocking has been an interesting one to track over the years. Over a decade ago, when dealing with 2-4 core processors, an effective overclock gave a substantial performance uplift, often allowing a $200 processor to perform like the one that cost $999. In recent years, however, not only have core counts increased, but companies like Intel have also become better at understanding their silicon, and can ship it out of the box almost at the silicon limit anyway. So what use is overclocking? We turned to Dan Ragland, who runs Intel’s Overclocking Lab in Hillsboro, Oregon, to find out what overclocking now means for Intel, what it means for Alder Lake, and how Intel is going to approach overclocking in the future.
Dan is a 24-year veteran, a true Intel lifer. While some employees join Intel after college, Dan actually delayed his degree after initially being contracted by Intel and then becoming a full-time blue-badge employee. His background in overclocking, like most of us, came from a love of gaming, and the need to get the best out of what we could afford. Dan’s benefit over those years has to be that he’s been closer than almost anyone else to how overclocking actually affects the silicon, and to how different tuning methods actually affect performance. It’s all very easy for us here at AnandTech to talk about voltage and frequency, whereas Dan and his team work with the architects to develop new functionality that benefits both regular products and all the features the overclocking models will have at hand. Having been a competitive overclocker myself, I’ve always known Dan at a distance, but since he recently made the leap to being the face of Intel’s overclocking product announcements, I used my recent trip to Portland to say hello and sit down with Dan for a face-to-face interview about the state of play with modern overclocking techniques and interest.
Dan Ragland, Overclocking Engineering Lead, Intel | Ian Cutress, AnandTech
Ian Cutress: You’ve been at Intel for 24 years, but you didn’t start at the OC Lab, because it really wasn’t a thing back then. Could you describe your journey through Intel?
Dan Ragland: I started when I was 20 years old as a contractor, actually, and I was going to do a six-month job at Intel working on the motherboards that Intel produced at the time – Intel branded motherboards – and that went well. I was like, you know, this is a lot of fun, I slowed down my completion of college, and I was able to join as a technician back in 1998. I worked on motherboards for a number of years, later on finishing my degree, and working in technical marketing for motherboards for a couple of years. Then I went on to take on a role to start with we called the Mobile Halo, which was our first ever mobile overclockable parts, and our gaming line-up that we have today. Throughout this whole time, overclocking was kind of a side project, so then after that mobile role for seven years, I came back to desktop, and I was blessed with being able to actually start this team that we have here in this form, this lab space.
IC: So when did the OC Lab really start as being a fundamental aspect of Intel’s product line? I know you mentioned the Halo Mobile parts, but when did it really take off from being a one or two person side-thing into being big? Beyond that, does the OC lab have an overarching mission?
DR: Yes, so for the first part of your question: people have worked on overclocking for a while – it’s really been a passion project for a lot of people, so I don’t want to discount the number of employees over the years that just contributed organically to overclocking, and there were a lot. But as a formal lab here, around 2014/2015 is when we formed the team, and started growing it from there. And yeah, I guess our mission is to provide an amazing experience to people who are interested in tuning the performance of, or overclocking, their processor. So we want to give them controls, capabilities, everything possible to make that a good experience.
IC: One of the questions I get every now and again is that Intel will sell you a chip that overclocks, but overclocking voids the warranty. Why would Intel offer a feature that arguably voids the warranty on the product that they’re selling to end-users? How do you marry that with having an overclocking lab presence here at Intel?
DR: Yeah, that’s a great question. So you know, Intel is a big company, and the top producer of processors worldwide. We want to be rock solid, stable, and reliable, and we want everyone to know they can expect that from us. So when it comes to overclocking, we do want to distinguish that there are separate risks that come from it. So you can buy a rock solid product, or you can buy an overclockable part and choose to opt in to overclock it, and we do want to be really upfront that there are some risks that come with that. Now, given the number of people that overclock and the amount of overclocking out there, I think there are plenty of people who are willing to take that risk and push performance to that next level. So that’s really where we’re focused on the overclocking side.
IC: How much experience did you have with overclocking before the OC Lab got to where it is today? My background is extreme overclocking and being in the overclocking leagues, but you obviously can’t compete in those because you’re an employee of Intel!
DR: So my background in overclocking has never really been on the extreme, world record-setting side. It’s always been about buying Intel’s best product and pushing it. Truly, before I worked at Intel, I wanted to push the best and make it better. So as a high school kid, I worked for a small computer shop, and we built clone machines, so I learned in those early days about buying that 33 MHz processor and pushing it to 50 MHz with a jumper setting or a crystal swap-out. That’s kind of where I got into it.
Then, inside Intel, we’re always tuning knobs and adjusting things. We had control of the ratio – at one point, something called ‘re-marking’ was going on in the market, and it was not a good thing, as third parties would modify the label on the processor and indicate that it was that 50 MHz part instead of the 33 MHz one. We needed a way to prevent that. So again, to enable a rock-solid, stable, reliable processor that people could count on, we’d lock the ratio, and at that point, before the K SKU existed, the only form of overclocking available was base clock overclocking. It was at that point that a group of passion-project volunteers inside of Intel started discussing how we could keep things rock solid and stable, but still allow people to overclock. That’s where the Extreme Edition, and then K, came into being.
I got to be there, as kind of one of the junior guys, as the Extreme Edition was being formed. I was part of the overclocking team for that project via the Intel motherboards at the time. Then I was more on the mobile side when the K SKU was created, so my predecessor was working on that with marketing and came up with the K SKU strategy. Then I re-joined shortly thereafter, and I’ve been taking it from there.
IC: So what stage does the OC Lab get involved in next-generation overclocking, and defining next-generation features? Is it pre-silicon, or are we talking about when the silicon is essentially baked and you’re just deciding how to optimize it further?
DR: We get involved very early. Ball-park, about five years before a product comes to market is when we get involved. So we’re in the early planning stages. All the different features that are discussed for a new processor, we’re in there, and we will have overclocking line items. We call it the Overclocking Domain – we make sure that we know about new subsystems and that we’re developing overclocking requirements to make sure we are covered. You can’t put overclocking on cruise control and just say, ‘hey, you’ve got the multiplier, you got voltage, you’re done’ – it requires continual cultivation to keep it moving forward.
IC: So if you’re involved three to five years out, and I were to mention Meteor Lake, Lunar Lake, all those names that we may have heard of – am I getting the right sense of timescale there?
DR: The overclocking team is involved in projects that you didn’t even name!
IC: Is the OC Lab solely involved on the CPU overclocking side? Intel has a wide array of products, some of which are overclockable.
DR: We are in touch with our counterparts in other parts of the company, including the graphics world. We act as one Intel, and we share back and forth, but at this stage, we’re operating kind of in parallel, but together.
Dan and a couple of members of the OC team
IC: At some level, overclocking could be considered the same as binning hardware. Intel has spoken about the general variability of silicon and the different voltage/frequency curves that come out of manufacturing. So to what level do you guys get involved in deciding what bins are correct – for example, what works at retail compared to what the headroom for overclocking is? We see a lot of people saying that Intel CPUs no longer have any headroom because they are binned to the max. That’s been a feature of recent generations – do you have much to say on this?
DR: Our number one priority is to deliver that out-of-the-box performance – the best we possibly can give to consumers. A rule of thumb is we do not withhold performance to leave it for overclocking headroom. So if we do find a performance opportunity to have a processor that gives you that best gaming experience, the best creation experience, we’re going to do that and not hold it back for overclocking. At that point, our job as the overclocking team would be to find that headroom. I say we leave no stone unturned – we look at every subsystem, every area, and find every last ounce of performance within those defined SKUs.
IC: To put you on the spot for a second – when Alder Lake launched, we had the 12900K – I remember in the presentations you saying that the one-click overclocking allows plus 100 MHz on the P-cores and plus 300 MHz on the E-cores. Would you consider that performance left on the table in the productization of that SKU? Or was that kind of performance something you found afterwards?
DR: Yeah, so good question. Performance comes in many areas, so when you’re defining a product, you tend to have a power threshold, including current and thermals. You have to test what voltage you need to run at those frequencies. So those are some of the things that kind of define performance. In that example you gave, that is indeed overclocking headroom, and you’ll see that a very large distribution of parts can achieve that. But no, it was not left on the table.
IC: Have you ever overclocked hardware that didn’t make it to market?
DR: Yeah, we love to experiment. You name it, we’ve tried to overclock it! All sorts of different product categories. We talked about notebooks and desktops, and there have been other things we’ve looked at overclocking. I don’t mean to imply that we’ve checked out everything that Intel’s ever done, but when a new product category comes to light, we look at it and ask if there is headroom. We ask if there is a market for overclocking. We ask if consumers would want this, or if there would be a value to it. I’ll give you one example – I did some pioneering work on SSD overclocking.
Actually, there is a lot of headroom to be had, at least when we looked at this in SSD overclocking. Because, like a CPU, they have their own power limits, and depending on whether you can cool it or what environment you are in, there is headroom. But we evaluated that, and wondered if the data was at risk – we surveyed the market and ultimately made the decision to not pursue that in earnest. It’s something we always keep our minds open to, but at the time that was the decision.
IC: On the other side, have you ever been tasked with overclocking something that you personally have thought ‘why would I ever want to overclock this’?
DR: Sometimes I don’t think that initially – any opportunity to overclock something new, let’s check it out. Then as you get into it, you look at some of the trade-offs, and you might realize that maybe the scope is very narrow for who would benefit from it. I hate to close any door on anyone though, even if it is a narrow opportunity. But yeah, those things might happen. I might later think that maybe that’s not the right direction.
IC: Should we bundle the term underclocking in with overclocking? Arguably a bigger portion of people are more interested in underclocking, making sure the processor is more efficient for a small form factor or for server installation. Do you do much on that front?
DR: I guess if you call that kind of performance tuning, or performance optimization, we’re kind of under that same umbrella. We do have as part of our overclocking toolkit the ability to undervolt or reduce the voltage, so yeah, I think that’s a fair assessment.
IC: Perhaps ‘Performance Tuning Lab’ doesn’t have the same ring?
DR: Exactly!
Extreme OC at Intel Innovation
IC: So back in the day, Intel used to organize extreme overclocking events. We’re sitting here with liquid nitrogen behind us, and you’ve shown me chiller lines running through your lab just for consistent performance. Are there any plans in the future to do extreme overclocking events?
DR: There’s nothing I can commit to right now(!), but yes, we’re actually actively involved in the community. We sponsor places like HWBot, and are trying to do our part to keep the community healthy and invigorated, as well as attract new people to the market. For people who grew up with overclocking, those opportunities might have come more organically and naturally. We are involved locally here with the university – Oregon State University has an Overclocking Club. We’ve hosted them here a number of times, and we support them with some sample units to do their testing, so that they get that experience.
IC: So I’ve noticed that with some of Intel’s product launches, you first get external overclockers on camera to do their work, rather than members of the internal OC team. No doubt your team has a big hand in helping them, but do you use external overclockers because those people are known in the community and have a following? Or perhaps the people here don’t necessarily want to be in front of the cameras?
DR: I’m glad you asked that question, because we feel strongly that it’s our job to create something for people to overclock themselves. It would be almost unfair for Intel’s internal overclocking team to come in and just smash all the records. You kind of steal that joy if you do that. So we do want to always work with the external ecosystem, and we want things to be attainable by overclockers – both for the everyday overclocker, and for those extreme overclockers.
IC: Is it because, arguably, you have more control over the tuning that you can do internally versus what people can do externally?
DR: Absolutely. I mean, it would be unfair for us to compete. The kinds of things that we could do here, we can have essentially a spinal tap to the processor and have more knobs than you could ever imagine. And that really would be undetectable, so it would simply be unfair for us to compete.
IC: So given that the world CPU frequency record is something like 8828 MHz, are you saying that you may have gone beyond that internally?
DR: Well, I’ll say this. We could have produced screenshots and validations at very, very high frequencies on 12th Gen. We did not.
IC: In the past, both you and your competitors have had unique overclocking processors that you seed to 100-120 people. I believe Intel back in the day used to call them Black Ops, and AMD had their TWKRs back in the Phenom II days. Is that still something that there’s interest in?
DR: Well, that sounds like fun, but I’m not allowed to discuss potential future products.
The 5.0 GHz all-core, 14-core, overclockable Intel Core i9-9990XE from 2019. It was auction-only from Intel.
IC: In the overclocking community, there are stories of how individuals will buy 500 units to find that one unit that’s great for all their overclocking, and their benchmarks and their results, trying to get world number one. With what you guys do here, can you put that 500 number into context? I assume you guys go through more than 500?
DR: Yeah, we have. Though generally it doesn’t take that many to get a statistically significant number of good units. We know the overall distribution – we have more data than everybody else, so we know what is a good sample. We can pick the highs, the lows, the in-betweens, and test with a smaller number of units and get that same information. But we’re still talking a large number, and it’s not uncommon to test 50 to 100 units. But generally not 500+!
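As a quick illustrative aside from us rather than Dan: the statistics behind that answer are easy to sketch. If the rough shape of a SKU’s frequency distribution is already known, a sample of 50 to 100 units pins down the highs, lows, and in-betweens almost as well as a much larger batch would. A minimal Python sketch, with entirely hypothetical numbers:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: peak stable frequency (MHz) for a SKU,
# modelled as a normal distribution purely for illustration.
population = rng.normal(loc=5100, scale=75, size=100_000)

for n in (50, 100, 500):
    sample = rng.choice(population, size=n, replace=False)
    lo, med, hi = np.percentile(sample, [5, 50, 95])
    print(f"n={n:3d}: 5th={lo:.0f}  median={med:.0f}  95th={hi:.0f} MHz")

The 50- and 100-unit percentile estimates land close to the 500-unit ones, which is the essence of the argument: knowing the distribution means a smaller, well-chosen sample is enough to characterize it.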
IC: So how much of your day is ensuring that the ones on the low end of that distribution are still worthwhile for end-users?
DR: We definitely factor that in. So for example with the Intel Speed Optimizer, part of the Intel Extreme Tuning Utility (XTU), we have to create a path that works for those lesser units and those extreme units.
IC: How often do you take competitors’ products and overclock them, in a sort of competitive analysis situation? For context, I can imagine Intel taking something like the Apple M1 and finding ways in which you can tweak it, and finding out how locked down it is. I’m not saying specifically the M1, but would that sort of thing be done here?
DR: I would just say that we believe it’s important to understand the competitive landscape. We want to know where our opportunities to grow are, but also where our strengths are. So we do keep a keen eye out for those, and we take it seriously. I mean on our team we want to win, we want to be number one, and so we leave no stone unturned, whether it’s internal, or beyond looking at what the opportunities are.
IC: What’s the most underrated or obscure overclocking feature that people need to talk more about?
DR: I’ll give you two.
The first one is per-core overclocking. People are starting to develop awareness about that. It’s been out for probably four years now, and more attention should be paid there – I think that thanks to the community, we’re seeing a little more of it. Per-core overclocking is really important now that there’s a big diversity in the capability of each core, as I’m sure you’re well aware – we have features like Turbo Boost Max 3.0, also known as favored core, that kind of highlight that.
The second one is something newer. We call it ‘Overclocking Thermal Velocity Boost’ and in hindsight, we probably should have given it a different name, because it sounds similar to Thermal Velocity Boost but it’s very different. It allows the user to define the frequency they will achieve at various temperature points, and the principle there is that cold silicon can achieve higher frequencies, and do so at a lower voltage. That’s the application there. It’s really only been around for a couple of years now, and I think that’s one folks should look at.
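To make that second feature concrete – and to be clear, this is our own conceptual sketch with hypothetical numbers, not Intel’s firmware interface or the XTU API – Overclocking TVB effectively boils down to a user-defined table mapping package temperature thresholds to extra ratio bins, applied only while the silicon stays below each threshold:

# Conceptual sketch of temperature-dependent ratio selection, in the spirit
# of Overclocking Thermal Velocity Boost. All thresholds and offsets here
# are hypothetical; the real controls live in the BIOS and in XTU.

BASE_RATIO = 52  # e.g. 5.2 GHz at a 100 MHz BCLK, for illustration

# (max package temperature in C, additional ratio bins while below it)
OCTVB_PROFILE = [
    (40, 2),    # below 40 C: run two bins higher (x54)
    (70, 1),    # 40-70 C: one bin higher (x53)
    (100, 0),   # 70-100 C: stock ratio (x52)
]

def active_ratio(package_temp_c: float) -> int:
    """Return the target ratio for a given package temperature."""
    for limit_c, extra_bins in OCTVB_PROFILE:
        if package_temp_c < limit_c:
            return BASE_RATIO + extra_bins
    return BASE_RATIO  # at or above the last threshold, fall back to base

for t in (25, 55, 85):
    print(f"{t} C -> x{active_ratio(t)} ({active_ratio(t) * 100} MHz)")

Colder silicon gets a higher ratio, which is exactly the principle Dan describes.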
IC: Right now, Intel’s consumer line goes up to 125 watts of base power, or 250+ watts of turbo power. Intel hasn’t done a high-end desktop part in a few generations, and we’re seeing other CPUs in the market at 280 W. Is there a point where perhaps Intel offers an overclockable CPU, but mandates liquid cooling only? Do you think Intel would ever put that restriction on the market?
DR: I wouldn’t rule anything out. I think the market could handle something like that, but I wouldn’t be able to comment.
IC: How overclocked is your system at home?
DR: It is overclocked, probably 20%? Well, I have a Comet Lake, but I still run the old X99 Haswell-E generation. The X99 one, that one I have overclocked about 25%. You know, it has its own purpose, but I had been reusing it recently and there’s a story behind it.
Some workloads, as you know, scale linearly with frequency. My daughter has lots of DVDs, and we wanted to put those on a tablet for viewing on a road trip. So for that job, the conversion scales linearly with frequency, and when I overclock by 25%, the job gets done roughly 25% faster. So as I’m going through that stack of DVDs, that was a great application I had there.
In the early days, I was more into that gaming scene, and I still love catering to gamers. It’s like once somebody is a quarterback, they’re always going to have a love for that sport. So I’ll always love gaming and overclocking to tune for gaming. But these days, my focus is more on creation.
IC: Should Intel label its overclocking GPUs with the letter K? In your personal opinion, is there brand identity in that K? Or do you think there’s an opportunity for something a bit more nuanced or novel? Do you think overclocking necessarily matters in the GPU space, because often we deal with base and turbo, and there are a lot of fluctuations within there?
DR: Well, my personal opinion is overclocking definitely matters even for GPU, and there’s definitely interest there. It actually goes hand-in-hand with the same systems that we overclock the processor on. Regarding the K SKU and using K for graphics, I mean that’s really more of a marketing call, but I don’t think anyone would be upset if that was done [laughs].
IC: To extend on the GPU front, we’re seeing HPC-type GPUs from multiple vendors go anywhere from, say, 250 W all the way up to 500 W+. Are you involved in any way in that?
DR: No.
IC: Do you wish you were?
DR: I would love to be part of that as well. Though we have plenty to do here with the PC, CPU overclocking, platform overclocking.
IC: Where’s the next overclocking revolution going to come from? Is it CPU, GPU, or a specific feature? Perhaps at the system level, we could have completely redesigned motherboards that make it easier to overclock or offer better thermal dissipation? You obviously have insight into several years in the future here, hence why I’m asking!
DR: There are a couple we could talk about.
There’s been a lot of technology development in the area of cooling and sub-ambient cooling, and you’ve seen Intel’s Cryo Cooling Technology. That’s really been kind of at the early stage, and we’re just trying to incubate something to see if you could actually cool the processor at sub-ambient, and what you could get out of it. So I think there may be an opportunity there, especially as efficiency improves, and as the technology improves. So keep an eye on that one – a sub-ambient cooled processor should really lift the overclocking capabilities and top frequencies.
Then in the area of memory overclocking, there’s a lot of opportunity. You’ve already seen DDR5, but have you seen the headroom with DDR5 without any crazy cooling? From a 4800 MT/s specification, with a couple of modules of memory, we’re seeing overclocks to 6000+ MT/s. That is very attainable by an average overclocker, or you can just go buy memory at that speed.
So I also think about user experience improvements, and there’s a big opportunity there – how can we make overclocking easier for the user? So those are maybe three areas that I think right now are ripe for further exploration and development.
IC: Internally when it comes to memory frequency, are you saying MHz or MT/s? I noticed you’re being very explicit when speaking to me, at least!
DR: I personally have always tried to use MT/s. You know it’s confusing for the consumer sometimes, because some of the software tools still report in MHz (Ian: All the software tools!). So we’re trying to work that out to make sure MT/s is the proper terminology and explain that it is double data rate, and that’s why you take MHz and multiply it by two.
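For anyone untangling the numbers their monitoring software reports, the conversion Dan is describing is just a factor of two for double data rate memory. A trivial sketch, with the example figures being our own:

def ddr_transfer_rate_mts(io_clock_mhz: float) -> float:
    """DDR transfers data on both clock edges, so MT/s = I/O clock (MHz) x 2."""
    return io_clock_mhz * 2

# A tool reporting a 2400 MHz I/O clock is describing DDR5-4800,
# and a 3000 MHz reading corresponds to DDR5-6000.
for mhz in (2400, 3000):
    print(f"{mhz} MHz I/O clock -> {ddr_transfer_rate_mts(mhz):.0f} MT/s")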
IC: Do we need more memory channels in consumer platforms?
DR: I think consumers would love all the memory channels you could give them. Do they need it? Probably not – maybe content creators as they tend to like more channels.
IC: So what I’m hearing is that Intel will re-enter the high-end desktop market?
DR: No comment on our future roadmap!
Many thanks to Dan Ragland and the Intel team for their time.
Also thanks to Gavin Bonshor for transcription.