
A few questions to the devs regarding the game, API and CPU scaling.


Matush


I'm directing these questions mainly at the devs, but if anyone from the community has tested/benchmarked the things I'm asking about (regarding CPU), any input/info would be greatly appreciated :)

 

I wasn't sure where to post my questions, though - whether they belong in the 'General Discussion' section or the 'General Support' one.

 

I've been lurking here on the official forums since ~Alpha 13, and I recall it being said many times that 'alpha' is "when the most important/meaningful features are being added to the game" and "beta is when the devs focus mainly on polishing, optimizing and bug-fixing".

 

In the current state of 7 Days to Die (A16.4) - how many cores/threads does the game use/benefit from? Let's say I'm going to be playing on a PC with a GPU that is hypothetically 5 times faster than a GTX 1080 Ti (to reduce the GPU bottleneck as much as possible). Would I see any difference in performance (in A16.4) between these CPUs (for example, all from the same generation, @4.5GHz, same IPC):

 

(cores/threads)

1. 4c/4t,

2. 4c/8t,

3. 6c/12t,

4. 8c/16t,

 

Would all four of those CPUs give the same in-game performance, or would, for example, the only noticeable "jump" be between the 1st and 2nd (while 2, 3 and 4 "spit out" very similar fps)? How do things look right now, and will they stay the same or improve, for example, in A17?
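
(For context on why the answer hinges on the engine rather than just the CPU, here's a rough Amdahl's-law sketch. The parallel fraction p = 0.5 is an assumed, purely illustrative number - not a measured figure for 7DTD.)

```cpp
// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
// of frame work that parallelizes and n is the number of hardware threads.
// The thread counts below mirror the four CPUs in the question.
#include <cstdio>

double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double p = 0.5;                 // assumed: half the frame work parallelizes
    const int threads[] = {4, 8, 12, 16}; // 4c/4t, 4c/8t, 6c/12t, 8c/16t
    for (int n : threads)
        std::printf("%2d threads -> %.2fx speedup\n", n, amdahl(p, n));
    // With p = 0.5, going from 4 to 16 threads only moves ~1.60x to ~1.88x,
    // which is why similarly clocked 4c and 8c parts can land close together
    // unless the engine parallelizes much more of its work.
    return 0;
}
```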

 

As for A17: I stared at my monitor for literally 5 minutes with my mouth wide open from happiness (I'm dead serious) when I saw these two lines in the Developer Diary: Alpha 17:

"Game Engine update to Unity 2017"

"Directx11 and Vulkan support added"

 

VULKAN. YES. How does 7 Days to Die's performance compare between DX9, DX11 and Vulkan? Does using Vulkan net any performance boost? Has anything been tested in this regard yet? Or maybe it's DX11 that will work best with 7 Days?

 

Regarding DX11, there are some cases/exceptions where it turned out that implementing Vulkan didn't net any better performance and the game still ran better on its previous main API, as in Dota 2's case:

A similar thing is currently happening with a game called "The Isle", which already has Vulkan support - a friend with a Radeon R9 Fury compared DX11 vs Vulkan and it turned out he was getting better performance on DX11 (even though Vulkan at this point generally performs really well on AMD cards).

 

On the other hand, once Mad Max on Linux received Vulkan support, performance basically doubled or tripled over OpenGL (the GPU in that benchmark was a GTX 980 Ti):

 

Here's another incredible example. As we know, AMD's RX 580 8GB competes directly with Nvidia's GTX 1060 6GB. In Wolfenstein II on Vulkan, an overclocked RX 580 goes waaay above the 1060, past the 1070, and lands near GTX 1070 Ti performance level (which is ~10% slower than a GTX 1080):

 

Thanks to Vulkan, Wolfenstein II is able to utilize even 16 CPU threads O_O: Link

 

All of these are just examples of the benefits Vulkan can bring. Wolfenstein II is an exception in that it only supports Vulkan, no other APIs - I just used it to illustrate "what happens when devs focus on such an incredible API". Nvidia's next generation of GPUs (Ampere/Volta) is also rumored to have significantly improved low-level API support (DX12 and Vulkan) over the current generation (Pascal), so Vulkan = lengthened usefulness of Nvidia/AMD GPUs.
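
For anyone wondering about the mechanism behind numbers like Wolfenstein II's 16-thread usage: Vulkan allows every thread to record GPU commands into its own command pool without locking, something DX9/DX11-era APIs largely couldn't do. Below is a minimal C++ sketch of that pattern - purely illustrative (bare-bones device setup, queue family 0 assumed, no validation layers or real draw calls), and it says nothing about how TFP/Unity would actually wire this up:

```cpp
// Per-thread command pools: the Vulkan pattern that lets engines record
// rendering work on many cores in parallel. Requires the Vulkan SDK.
#include <vulkan/vulkan.h>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    // 1. Create a bare Vulkan instance.
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance;
    if (vkCreateInstance(&ici, nullptr, &instance) != VK_SUCCESS) return 1;

    // 2. Grab the first physical device; assume queue family 0 for brevity.
    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    if (gpuCount == 0) { std::puts("no Vulkan-capable GPU"); return 1; }
    gpuCount = 1;
    VkPhysicalDevice phys;
    vkEnumeratePhysicalDevices(instance, &gpuCount, &phys);

    float prio = 1.0f;
    VkDeviceQueueCreateInfo qci{VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO};
    qci.queueFamilyIndex = 0;
    qci.queueCount = 1;
    qci.pQueuePriorities = &prio;
    VkDeviceCreateInfo dci{VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO};
    dci.queueCreateInfoCount = 1;
    dci.pQueueCreateInfos = &qci;
    VkDevice device;
    if (vkCreateDevice(phys, &dci, nullptr, &device) != VK_SUCCESS) return 1;

    // 3. One command pool per hardware thread. Pools are externally
    //    synchronized objects, so each thread records into its own pool
    //    with no locks - this is where the multi-core scaling comes from.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < workers; ++t) {
        threads.emplace_back([&device, t] {
            VkCommandPoolCreateInfo pci{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
            pci.queueFamilyIndex = 0;
            VkCommandPool pool;
            if (vkCreateCommandPool(device, &pci, nullptr, &pool) != VK_SUCCESS) return;

            VkCommandBufferAllocateInfo ai{VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
            ai.commandPool = pool;
            ai.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
            ai.commandBufferCount = 1;
            VkCommandBuffer cmd;
            vkAllocateCommandBuffers(device, &ai, &cmd);

            VkCommandBufferBeginInfo bi{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
            vkBeginCommandBuffer(cmd, &bi);
            // ...a real engine records draw calls for one chunk of the scene here...
            vkEndCommandBuffer(cmd);

            std::printf("thread %u recorded its command buffer\n", t);
            vkDestroyCommandPool(device, pool, nullptr);
        });
    }
    for (auto& th : threads) th.join();

    vkDestroyDevice(device, nullptr);
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

The same idea, in reverse, is why a DX9/DX11 renderer usually tops out at one or two heavy threads no matter how many cores the CPU has.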

 

Here's a good example of that. The Radeon HD 7970 3GB (released in Q4 2011), thanks to very good low-level API support, is able to maintain ~100fps on high/highest settings in Doom 2016 (on Vulkan) during 'action on screen':

There's a 5-year gap between that GPU and that game O_O. Usually after that many years even most high-end GPUs are "almost dead" in the newest games.

 

So overall, how do things look with Vulkan in 7 Days to Die? It can greatly improve performance if properly implemented (+ fewer negative reviews on Steam from people complaining about performance). And what about CPU scaling? Is it something that will be "touched"/focused on in the Alpha or Beta stage? Maybe better multicore support would allow 'turning up' a few things here and there, like more zombies on screen?

 

Another question. If I recall correctly, Jake from State Farm - ummmm, I mean Mad Mole ;) - once talked about an interesting feature that may or may not come; it was something like this: big zombie hordes that would roam the whole map, with the whole thing somehow simulated off screen. Maybe Vulkan and better multicore support would allow for something like this?
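
Purely as speculation on my part: here's a tiny sketch of how such off-screen horde simulation could scale with core count, just by splitting AI updates across worker threads. This is not TFP's actual design - 'Zombie' and 'update_slice' are made-up names for illustration.

```cpp
// Illustrative only: update independent slices of a big horde in parallel,
// so more cores = more zombies simulated per tick.
#include <cstdio>
#include <thread>
#include <vector>

struct Zombie { float x, y; };  // hypothetical off-screen zombie state

void update_slice(std::vector<Zombie>& horde, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        horde[i].x += 0.1f;     // stand-in for real pathfinding/AI work
}

int main() {
    std::vector<Zombie> horde(100000);  // a "big roaming horde"
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 4;

    // Each worker owns a disjoint slice, so no locking is needed as long
    // as zombies don't touch each other's state during the update.
    std::vector<std::thread> pool;
    size_t per = horde.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * per;
        size_t end = (w + 1 == workers) ? horde.size() : begin + per;
        pool.emplace_back([&horde, begin, end] { update_slice(horde, begin, end); });
    }
    for (auto& t : pool) t.join();

    std::printf("updated %zu zombies on %u threads\n", horde.size(), workers);
    return 0;
}
```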

 

I realise that optimizing the game for multicore/Vulkan support isn't easy. I'm just wondering whether that's an area of focus for TFP and whether it's on their roadmap :)

 

We're finally entering an era where high core/thread count CPUs are becoming more and more affordable for average consumers. No more Intel monopoly on the mainstream with 4-core CPUs and +5% performance boosts every generation like during 2011-2017. That multicore "era" started around a year ago when AMD - "Ryzen from the Ashes of the Singularity" ;) - came back with a ~$300-350 8-core CPU that offered very similar performance to the $1000 8-core i7 6900K, and now a lot of people are buying/upgrading to awesome ~$200 mid-range CPUs like the 6c/6t i5 8400 and the 6c/12t R5 1600.

 

Also, Nvidia has ended driver support for 32-bit systems: Link

Does this mean that 7 Days to Die will finally "spread its wings" by going 64-bit only, or are the devs still planning to support 32-bit?
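
(If they did go 64-bit only, the build itself could even enforce it at compile time. A hypothetical guard, not anything from the actual game code:)

```cpp
// Hypothetical compile-time guard for a 64-bit-only build: pointers are
// 8 bytes on 64-bit targets, so a 32-bit build fails to compile at all.
#include <cstdio>

static_assert(sizeof(void*) == 8, "64-bit build required; 32-bit is unsupported");

int main() {
    std::printf("running a %zu-bit build\n", sizeof(void*) * 8);
    return 0;
}
```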

 

Regards to The Fun Pimps and the whole community :)


I believe Vulkan is an open, cross-platform graphics API, intended as a sort of replacement for / improvement upon OpenGL.

 

I've played at least one game that utilizes it, that game being Doom (2016); it performed well, better than the default renderer (I don't remember whether it was OpenGL or DirectX) in my experience.


Since the majority of the processing for the game is on the CPU, a feature like Vulkan is going to have very little effect. I'd go as far as to say that unless your GPU is a total potato, it's not going to have any effect at all.

 

Additionally, 7D switched to 64-bit a few alphas ago, but the 32-bit crutch is still available for those who require it. Personally, though, I think 32-bit support should have ended for everyone years ago. The only reason it still exists is stagnant developers and the questionable practices of large corporations selling craptops.

 

In A16, a 32-bit system does not even meet the min spec required for the game.

 

Also, on the multi-core/Intel thing: they stepped away from that a few years earlier, with CPUs like the i7-3930K. Not only does it overclock like a beast, but with 6 cores/12 threads it can still keep up with the most current CPUs (overclocked to almost 4.5GHz, it benches slightly higher than an i7-7700K). For some stupid reason they went back to the old 4-core overclocked design again.


I still have a bit of an unusual system, a 1080 Ti and a 2500K :) (4.4GHz), and I can still just about scrape by with 64 zombies, but with FPS occasionally dropping into the 10s during late horde nights and hovering mostly in the 20s - so not exactly playable, but I still prefer it to lower zombie numbers, as I enjoy them destroying the world, and the "threat" they pose, more than I mind the choppiness.

 

The GPU is not utilized to 100% from what I remember (I haven't paid attention for a month or two, but I was not concerned about the GPU side at all), with the game set to max.

 

4 threads are definitely used, but I do not know whether more are, and we are most certainly CPU limited.

 

It would be interesting to find out how others are doing in a similar scenario with 4c/8t, 6c, 6c/12t and higher core/thread CPUs - i.e. at least how many cores the game utilizes and what effect that has on FPS at max zombie numbers around day 49+.

 

I delayed upgrading my CPU this autumn, waiting for Zen+, and should finally pull the trigger in the spring. RAM prices are not helping.


Quote:

"Since the majority of the processing for the game is on the CPU, a feature like Vulkan is going to have very little effect. I'd go as far as to say that unless your GPU is a total potato, it's not going to have any effect at all.

Additionally, 7D switched to 64-bit a few alphas ago, but the 32-bit crutch is still available for those who require it. Personally, though, I think 32-bit support should have ended for everyone years ago. The only reason it still exists is stagnant developers and the questionable practices of large corporations selling craptops.

In A16, a 32-bit system does not even meet the min spec required for the game.

Also, on the multi-core/Intel thing: they stepped away from that a few years earlier, with CPUs like the i7-3930K. Not only does it overclock like a beast, but with 6 cores/12 threads it can still keep up with the most current CPUs (overclocked to almost 4.5GHz, it benches slightly higher than an i7-7700K). For some stupid reason they went back to the old 4-core overclocked design again."

 

Hmmmm, I have heard here and there that one of Vulkan's advantages is that a proper implementation lets games be "aware" of additional cores/threads on a CPU - which would mean that the more cores/threads somebody's CPU has, the more fluid the gaming experience could be (higher avg fps, far fewer sudden fps drops, no micro-stutter, etc.). Some people say Wolfenstein II is a good example of that, as the link I posted above (about the game using even 16 threads) shows. That game is a best-case scenario, though, and was built with only Vulkan in mind (there's no DX9, 10, 11 or 12 support in it). I'm not 100% sure how Vulkan will affect (Unity) 7 Days to Die, but here are some more interesting bits & pieces of info regarding Vulkan on Unity that I've found so far:

 

Quotes:

"However, we’re seeing large performance gains even when running the renderer in a single thread. In one of our internal benchmarks we’re seeing up to 35% improvement in frame times on Android, compared to OpenGL ES 3.1 renderer, even though they’re both running in a single rendering thread!" Source

 

"The Unity developers claim they've seen a 30-60% rendering performance improvement out of the box, just by using Vulkan." Source

 

As for Intel and multicore CPUs: I'm gonna go a bit off-topic here about "why we had only 4c/8t CPUs from Intel for so long" and later about "the benefits of multicore optimization in games" - which technically also applies to 7 Days to Die ;)

 

Looking at what has been going on in the PC hardware segment this decade: back when AMD failed with their FX CPU series (around 2011/2012), Intel saw that competition had basically ceased to exist and decided to milk everybody with those 4-core chips and +5% performance improvements every generation, for years. There wasn't any point in bringing CPUs with more than 4 cores, or bigger performance boosts, to the mainstream, since people didn't have any choice in CPU purchases - there was only one option: Intel. As a result, the most mainstream CPUs were 4c/4t i5s and 4c/8t i7s. And if those CPUs were the most popular among gamers, what would be the point for game dev companies to optimize their games to benefit from more than 4 cores and 8 threads? That would be a waste of resources. There was also no point in paying a hefty premium for any 6-core Intel chip (if somebody was focused purely on gaming), since games weren't benefiting from those additional cores/threads (a very tiny percentage of PC gamers had 6-core CPUs or better).

 

Those "new" generations of cpus intel was releasing were barely any different from each other - that's why almost every 'core' series cpu released after 2011/2012 is still quite good today. Here, check out IPC (Instructions per cycle) improvements - clock for clock comparison of the few last generations: Link. The funny thing is that Kabylake (7th gen, 7600k,7700k) is just refreshed/oc'd Skylake (6th gen, 6600k,6700k etc) - there are no IPC improvements at all. 8th gen Coffeelake cpus like for example 6c/12t 8700k is known as "7700k +2c/4t" which means that there are also no IPC gains: Link1, Link2, Source

Basically, in short, in terms of single-core performance:

3GHz Skylake = 3GHz Kaby Lake = 3GHz Coffee Lake

 

Right now I'm on a 4c/4t Sandy Bridge 2500K OC'd @4.6-4.8GHz (for ~6 years now), which (according to the CPU-Z benchmark) comes very close in raw CPU processing power to a Kaby Lake 4c/4t i5 7400/7500. 6-core Coffee Lake looks like the first generation that is finally quite decent (one that would be worth upgrading to from my 2500K, though I'm more interested in the next Zen generations) and offers a nice performance jump, but the fact that a 3GHz Coffee Lake is only ~20-25% faster (in IPC / single-core perf) than a 3GHz Sandy Bridge is just depressing, especially considering there's a ~7-year gap between these two generations. Both CL and SB can also be OC'd to the ~5GHz area.
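
To put rough numbers on that: single-core performance scales approximately as IPC x clock. A back-of-the-envelope sketch using the ~22% IPC uplift mentioned above (illustrative figures, not measurements):

```cpp
// Single-core performance ~ IPC x clock. Rough, assumed figures:
// Sandy Bridge 2500K OC'd to 4.8GHz (IPC baseline = 1.0) vs a
// Coffee Lake chip OC'd to 5.0GHz with ~1.22x the IPC.
#include <cstdio>

int main() {
    const double sb_ipc = 1.00, sb_ghz = 4.8;  // Sandy Bridge, OC'd
    const double cl_ipc = 1.22, cl_ghz = 5.0;  // Coffee Lake, OC'd (assumed)
    double speedup = (cl_ipc * cl_ghz) / (sb_ipc * sb_ghz);
    std::printf("Coffee Lake vs Sandy Bridge, single core: ~%.2fx\n", speedup);
    // ~1.27x after ~7 years - the "depressing" gap described above.
    return 0;
}
```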

 

Here comes AMD, the company everybody laughed at for the last several years. After the FX series failure they decided to throw the majority of their resources at writing a completely new architecture from scratch (otherwise they would have gone bankrupt). It took them 3-4 years, and 'Zen' (Ryzen) is the result. They managed a 52% IPC uplift over their last 'Excavator' generation. At this point they're very close to Intel's single-core performance clock for clock: Link. They only need to push their clocks above 4GHz, since current Ryzen has an OC wall around that point, and the upcoming 'Zen+' generation (which will appear in ~2 months) is aimed at considerably increasing Zen's overall overclockability.

 

What makes it even more amazing is comparing the R&D (research & development) budgets of AMD, Intel and Nvidia: Link. It's just incredible how AMD still manages to keep up with/compete against two much bigger companies at the same time O_O.

 

The sad thing is that we could have much faster and cheaper CPUs and GPUs today if Intel hadn't decided to play dirty in the past. This is how things looked in the mid-2000s:

 

A review from 2004:

The Athlon 64 3200+ (2.0GHz) matches/beats the Pentium 4 560 (3.6GHz) in games: Link

 

A similar thing happened 1-2 years later with the first dual cores, Athlon 64 X2 vs Pentium D.

AMD CPUs were still 20-30% faster in games than Intel ones: Link1, Link2

 

Even in power consumption: back then Intel CPUs were drawing over 100 watts more than AMD ones: Link

 

Everything was going very well for AMD, so why did they go downhill so badly that they almost went bankrupt? This happened: Link1

 

This is what was going on a year ago at AMD's Ryzen release: Link1, Link2, Link3

 

Intel was in full panic mode, since they weren't expecting AMD to come back with such a clear strike in price-to-performance: Link

They even rushed the Coffee Lake launch by 3-6 months out of panic, while originally it was supposed to arrive around Q1/Q2 2018: Link. Here are more reasons why Intel was panicking: Link1, Link2+Link3.

 

Here's the power consumption of AMD's 16-core/32-thread 1950X Threadripper (released around half a year ago) vs Intel's 10-core/20-thread i9 7900X. Yep, a 10-core Intel CPU vs a 16-core AMD CPU: Link. It gets even more interesting when we look at the raw total CPU processing power of the current top high-end AMD CPUs vs Intel ones: Link, Source. Not to mention that both CPUs - Intel's 10-core i9-7900X and AMD's 16-core 1950X - are priced very similarly.

 

End of part 1/2 of my post.


Part 2/2 of my post.

 

Also, it's sad that Intel has recently started changing their motherboards more frequently than some people change their underwear. The 200-series motherboards (for Kaby Lake, like Z270 for the 7600K/7700K) were "alive" for only around 10 months, from the January 2017 KL release till the rushed October 2017 CL release. Coffee Lake motherboards (Z370, etc.) don't support any Kaby Lake/Skylake CPUs, while AMD has stated that current AM4 motherboards for Ryzen (Zen), like B350/X370, are also going to support Zen+, Zen 2 (2019) and Zen 3 (2020) - plus all Zen CPUs can be OC'd on any mid-range $70-120 B350 motherboard.

 

This video of Steve Jobs talking about Xerox fits perfectly with what was going on at Intel in 2017. Basically, it's like he's talking about Intel: Link

 

They also managed to piss off a lot of their users:

Link1

 

Going back to multicore CPUs in gaming. So yeah, earlier there wasn't any point in optimizing games to benefit from more than 4-core CPUs. Now? AMD has released 6c/12t and 8c/16t Ryzen, Intel has released 6c/12t Coffee Lake, and *something* is already starting:

 

 

Star Citizen is going to benefit from more cores/threads

 

Assassin's Creed Origins also benefits from 16-thread CPUs

 

Prey 2017 (the GPU in this test was a GTX 1080 Ti FE)

 

According to tests/benchmarks conducted by gamegpu.com - quote:

"Prey uses to 10 computing threads. But fully uses 6 CPU cores."

 

Space Engineers (Multithreaded Physics update)

 

Avorion (Multithreading Update)

"In short: The server and client will both run a lot smoother on machines with more CPU cores."

 

Forza Horizon 3 (overall multithread update)

 

4c/4t CPUs - whether they can be OC'd to 5GHz or not - shouldn't be worth more than $150-160 at this point. All of them. This is what I mean:

 

Here's a comparison of CPU usage between a 6c/12t R5 1600 and a 4c/4t 7600K

 

This is when 4c/4t CPUs got pushed into the low-end segment

 

And this is what happens when all cores/threads of a CPU hit 100% usage (micro-stutter and fps drops)

 

The usual "you don't need more than 4 cores for gaming" talk is dying right now, especially for anyone aiming at fluid, high-fps gaming.

 

Here are a few more interesting bits of info regarding AMD's Zen:

 

An R5 1600/1600X @4GHz edges out a 6800K @4.2GHz in Cinebench and matches it in Handbrake. Even though Ryzen is a bit behind in IPC, those results indicate that Ryzen's SMT is more efficient than Intel's HT.

 

Fast RAM with low latency can net some surprising performance boosts on current Ryzen. Due to differences between the 'Core' and 'Ryzen' architectures, it affects Ryzen (Infinity Fabric) more than Intel CPUs. It's worth keeping in mind that there's a 1.1GHz clock difference between the Ryzen 7 @3.9GHz and the 7700K @5GHz in this test:

 

Cheers ;)

Wow, I had to split my post in two ^^.


  • 1 month later...

Archived

This topic is now archived and is closed to further replies.
