In our tests here (with more in our benchmark database), AMD's 3990X would take the crown over Intel's dual-socket offerings. The only thing really holding me back from awarding it is the same reason there was hesitation on the previous page: it doesn't do enough to differentiate itself from AMD's own 32-core CPU. Where AMD does win is in the 'money is less of an issue' scenario, where using a single-socket 64-core CPU can help consolidate systems, save power, and save money. Intel's CPUs have a TDP of 205W each (more if you decide to use the turbo, which we did here), which totals 410W, while AMD maxed out at 280W in our tests. Technically Intel's 2P setup has access to more PCIe lanes, but AMD's lanes are PCIe 4.0 rather than PCIe 3.0, and with the right switch they can drive many more devices than Intel's (if you're saving $16k, then a switch is peanuts).

We acknowledge that our tests here aren’t in any way a comprehensive test of server level workloads, but for the user base that AMD is aiming for, we’d take the 64 core (or even the 32 core) in most circumstances over two Intel 28 core CPUs, and spend the extra money on memory, storage, or a couple of big fat GPUs.

Aside from the artificial maximum memory limitation – which AMD put in place to protect its own Epyc processors – the 3990X is simply a masterpiece. To be able to get 64 cores and 128 threads for a comparatively modest $3990 is nothing short of stunning, and while few of us actually need a processor like this, the 3990X shines like the halo product that it is.

You know, I criticized the last anandtech ryzen review over bias, but I saw none of that here. It’s a good review that sticks to the facts!

There’s no question AMD is doing exceptional work with highly parallel tasks that don’t trigger NUMA bottlenecks. AMD is way out ahead for these types of workloads and way more cost effective than intel to boot. Yet I don’t feel they tried to hide the negatives and they acknowledged intel’s frequency advantages. My only critique is that they should also test under linux.

One of the continual talking points about new CPUs is whether the ecosystem is ready for them, especially with AMD pushing core counts ever higher. There's no point having a million cores if everything is written for a few cores – not everyone runs a thousand copies of the same workload at the same time. Unfortunately, that is what happened here with the 3990X. We're in a situation where only a few of the software packages we tested work great with the CPU, and the operating system is behind as well. … I've heard a lot of silicon engineers say that adding cores helps some workloads, but adding frequency helps everything. The question then becomes whether you target workloads that scale out (more cores), or whether scaling up (more frequency) is the better solution. We either end up with CPUs targeted at one or the other, or a combination CPU that tries to do both. … For the first stage, the consumer/prosumer level, our conclusion is that the usefulness of the 3990X is limited. Aside from a few select instances (as mentioned: Corona, Blender, NAMD), the 32-core Threadripper at half the price performed on par or within margin. For this market, the $2000 saved between the 64-core and the 32-core can easily net another RTX 2080 Ti for GPU acceleration, and that would probably be the preferred option. Unless you run those specific tests (or ones like them), go for the 32-core and spend the money elsewhere. Aside from the core count, there is little to differentiate the two parts.
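To put the scale-out versus scale-up question in rough numbers, Amdahl's law is the usual back-of-envelope tool: the speedup from N cores is 1 / ((1 - p) + p/N) for a workload whose parallel fraction is p. The Python sketch below is purely illustrative; the parallel fractions and the 3.0 GHz / 4.5 GHz clocks are assumed values for the comparison, not figures from the review.

```python
# Back-of-envelope Amdahl's law comparison: more cores vs. more frequency.
# The parallel fractions (p) and clock speeds below are illustrative assumptions,
# not figures measured in the review.

def amdahl_speedup(p: float, n_cores: int) -> float:
    """Speedup over a single core for a workload that is p-parallel."""
    return 1.0 / ((1.0 - p) + p / n_cores)

def effective_throughput(p: float, n_cores: int, clock_ghz: float) -> float:
    """Rough 'work per second' proxy: clock speed times Amdahl speedup."""
    return clock_ghz * amdahl_speedup(p, n_cores)

if __name__ == "__main__":
    for p in (0.50, 0.95, 0.99):
        many_slow = effective_throughput(p, n_cores=64, clock_ghz=3.0)  # scale out
        few_fast = effective_throughput(p, n_cores=16, clock_ghz=4.5)   # scale up
        print(f"parallel fraction {p:.0%}: "
              f"64C @ 3.0GHz -> {many_slow:6.1f}, "
              f"16C @ 4.5GHz -> {few_fast:6.1f}")
```

With these assumed numbers, the fewer-but-faster configuration wins at a 50% parallel fraction, the two roughly tie around 95%, and the 64-core part pulls far ahead at 99% – which is essentially the Corona/Blender/NAMD versus everything-else split seen in the results.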

All this progress from AMD makes me wonder if typical software publishers will be more inclined to take advantage of massive CPU parallelism in the future. The gut reaction is "duh, yes they will", but we also need to consider that a lot of the easy parallelism already runs on the GPU, which trounces even AMD's 128-thread CPUs at parallelism. I've said it before: these make tons of sense for enterprise servers, but my gut feeling is they will remain of marginal value for average desktop use cases. The economics and scalability still seem to favor GPUs for graphics and physical simulation.

Anyways, I'd still love to have a 64C/128T CPU just to see what I could do with it. Some workloads can't use a GPU; code compilation was brought up as an example last time. I'd be very curious to see just how many threads are useful before reaching diminishing returns. 128 threads in parallel would expose a lot of disk & network I/O bottlenecks; heck, I'm already hitting those bottlenecks today. Low-cost 10gbps, where art thou?
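If anyone wanted to actually measure that, a minimal harness is easy to throw together: run the same batch of CPU-bound jobs at increasing worker counts and watch where the speedup flattens. The Python sketch below is a generic stand-in; busy_work is just a placeholder, and you'd swap in your real workload (compile jobs, encode chunks, etc.) to find your own knee in the curve.

```python
# Minimal harness for finding the point of diminishing returns on a many-core CPU.
# busy_work is a placeholder workload; replace it with a real task to see where
# your own jobs stop scaling (CPU limits, disk/network I/O, memory bandwidth...).

import time
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # Purely CPU-bound placeholder; processes are used so the GIL isn't a factor.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_with_workers(workers: int, tasks: int = 256, size: int = 200_000) -> float:
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(busy_work, [size] * tasks))
    return time.perf_counter() - start

if __name__ == "__main__":
    baseline = run_with_workers(1)
    for workers in (1, 2, 4, 8, 16, 32, 64, 128):
        elapsed = run_with_workers(workers)
        print(f"{workers:3d} workers: {elapsed:6.2f}s  (speedup {baseline / elapsed:4.1f}x)")
```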

About the core count: there's no use for all those cores if they can't be fed at a proper speed, hence DDR4-3200 and PCIe 4.0.
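As a rough sanity check on the "feeding the cores" point, the peak-bandwidth arithmetic works out as below. These are theoretical maxima for quad-channel DDR4-3200 on the TRX40 platform; sustained bandwidth in practice will be lower.

```python
# Rough back-of-envelope: how much DRAM bandwidth each core gets on a 3990X.
# DDR4-3200 moves 3200 MT/s * 8 bytes per transfer per channel; the TRX40
# platform provides four memory channels. Theoretical peak only.

MT_PER_S = 3200e6          # DDR4-3200 transfer rate
BYTES_PER_TRANSFER = 8     # 64-bit channel width
CHANNELS = 4               # quad-channel on TRX40
CORES = 64

per_channel = MT_PER_S * BYTES_PER_TRANSFER   # ~25.6 GB/s per channel
total = per_channel * CHANNELS                # ~102.4 GB/s across four channels
print(f"Peak DRAM bandwidth: {total / 1e9:.1f} GB/s")
print(f"Per core with all 64 busy: {total / CORES / 1e9:.2f} GB/s")
```

Roughly 1.6 GB/s per core when everything is loaded, which is exactly why bandwidth-hungry workloads stop scaling long before thread count runs out.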

Quoting linus tech tips… “So if you want to try this out at home, these cards are actually available on ebay for like $200-$300 a pop and then you’ll pay about $60 for a 3 meter cable like this one.”

I want to be clear that I don't just want to play with 10gbps connections; I actually need it to permanently replace my current network switches. I'd prefer to get genuine products from an authorized seller with a warranty. Ebay sellers are usually gray market with no warranty, and scanning for the best deals on ebay puts you at greater risk of counterfeits; sometimes the risk is acceptable, but meh. I could put that aside since beggars can't be choosers. At the very minimum I'd need two 10gbps NICs plus two 10gbps switches to upgrade, because my computers and servers aren't in the same room (and I have other 1gbps peripherals that I still need to connect too). Unfortunately 10gbps switches with ethernet ports haven't come down in price. The passive SFP+ connections that are commonly found are limited to short ranges; long-range options are available (including fiber), but the active transceivers aren't cheap on top of the prices for everything else. The house is already wired for ethernet with 10gbps-capable cable, and I'd hate to have to run new cables through walls and floors to reach the other side of the house. One last detail that may sound extraneous but is important to me: due to limited free PCIe lanes in my computer, I want a 4-lane PCIe 3.0 card instead of an older 8-lane PCIe 2.0 card that would incur a performance penalty with insufficient lanes. All the new cards are PCIe 3.0; they're just not cheap.
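For what it's worth, the lane arithmetic looks roughly like this. It's a quick sketch using the usual theoretical per-lane maxima (PCIe 2.0: 500 MB/s after 8b/10b encoding; PCIe 3.0: ~985 MB/s after 128b/130b); real NIC throughput will be somewhat lower, and a dual-port card is where the older configuration gets tight in a short slot.

```python
# Quick check of whether PCIe lane counts can bottleneck a 10GbE card.
# Per-lane figures are theoretical maxima after encoding overhead.

PCIE2_PER_LANE_GBPS = 0.5 * 8      # 4 Gb/s per PCIe 2.0 lane
PCIE3_PER_LANE_GBPS = 0.985 * 8    # ~7.9 Gb/s per PCIe 3.0 lane
TEN_GBE_LINE_RATE = 10.0           # Gb/s per 10GbE port

configs = {
    "PCIe 3.0 x4 (newer card)":           PCIE3_PER_LANE_GBPS * 4,
    "PCIe 2.0 x8 (older card, full slot)": PCIE2_PER_LANE_GBPS * 8,
    "PCIe 2.0 x8 card in an x4 slot":      PCIE2_PER_LANE_GBPS * 4,
}

for name, gbps in configs.items():
    ports = gbps / TEN_GBE_LINE_RATE
    print(f"{name}: {gbps:5.1f} Gb/s slot bandwidth (~{ports:.1f}x 10GbE line rate)")
```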

I've been watching the market since 2018 at least, but the prices haven't really been dropping, and some have even gone up due to tariffs, I believe. I'd be hard pressed to spend what it would take to do it right on top of other, more pressing upgrades. Given that the prices aren't budging, if I could find a good enough deal to save some money with used and/or Chinese suppliers, I might give in to the risk and compromises and try that, but it's still a lot of money for something I was expecting to be much cheaper by now.

When you're already ready to put $4000 into a 64-core processor, I bet investing a bit more into a genuinely performant network for, say, $600 isn't that much to get the best out of your investment. I mean, Holy Moly, 1.21 gigawatts!!!!

I don't think you realize how expensive this stuff is; $600 total for the LAN upgrade would be very reasonable! However, the estimate is way off. The cheaper managed 10gbps ethernet switches tend to be in the $650-$1000 range, and well-known brands are around $2000. Keep in mind I'll need two of these to upgrade my LAN, and on top of this I still need at least a couple of 10gbps NICs for the computers.

A much cheaper solution that I had considered is just link aggregation. I already have extra gigabit NICs lying around and my switches already support it. I'm not a fan of having to lay down extra copper and run more ethernet cables in parallel, though. Another caveat is that while linux is capable of bonding any two ethernet adapters, windows requires the drivers to specifically support it. But these are just gripes; I would still do it, except I was hoping to have affordable 10gbps options by now, which would be better in every single way. Given that 10gbps has not gotten more affordable, maybe I need to re-evaluate link aggregation; it just sucks that it's less versatile and a fraction of the bandwidth.

I've been contemplating adding one of these mikrotik 10gb switches to my home for a small 10gb network, mostly for between the server and my desktop – but I've simply not found the right excuse just yet.