Posts posted by Naz

  1. Usually decaf coffee and water, or beer, with very occasionally shots of whatever spirits I have lying around. Never been a fan of eating at the PC, especially sticky or greasy food. When I was a kid and you had pizza while taking turns playing RE4, no one wanted to be handed a controller covered in grease😄so that's always stuck with me. Although very, very occasionally I'll eat something that can be eaten without touching the food, like sweets from the packet etc. Maybe I'm just a weirdo😂

  2. 3 hours ago, Vampirenostra said:

    any tips to try affinity with Macs?)))

    Unfortunately I haven't used a Mac since like 2001, so I have no idea how you would do it on one. A Google search suggests it is possible, but it might require some C++ or Python. For all I know it could be as easy as on Windows; I just don't have any experience with macOS.

  3. 12 minutes ago, samljer said:

     

    I Have an i7-6700K@Stock (summer and all) and a GTX1080 w 32GB ram.

    Monitor is a 1440p Ultrawide with 144hz refresh. specifically this one

    LG 34GK950F-B 34" 21:9 Ultragear WQHD Nano IPS Curved Gaming Monitor

     

    My frame rates are 90-110 out and about, while shooting, etc.

    40-50 horde night (still above the freesync 2 minimum so still buttery smooth)

     

    Basically not at all limiting if you turn a few settings to Medium...

    specifically shadows. turning that up 1 notch tanks the FPS by half in each category.

    Yes, settings matter when talking about performance. I usually test 2 sets of settings: everything cranked, and one with lower settings in key areas. Reflections will tank performance the most; reflection quality, reflective shadows and screen space reflections can severely impact your performance. Shadows are next, but not as bad. Then tree quality is an odd one, as it seems to control how close high quality block models start rendering. Set it to low and you'll see trash piles popping in everywhere 😛 but it can help to turn it down to medium or high.

    But yes, the discussion was about being GPU limited, not fps limited; the distinction being that if you're GPU limited it means there is a bottleneck somewhere in your system and you're not getting the GPU's full performance. This doesn't mean it will be a bad experience; it could still give perfectly playable framerates with settings you're happy with, and if that's the case then you can just enjoy the game and not worry about it. However, if you're not satisfied with your system's current performance, step 1 is to find which component is the weakest link, and if you already have a powerful GPU it's usually the CPU.

  4. 3 hours ago, Darklegend222 said:

    Probably answered in an earlier page, so indulge someone inferior..

     

    May i ask what cpu you tested this on? I'm currently using an i9-10850k processor and am curious if it would be limiting at 1440p? I've got a 6700xt soon to be put in the rig so i assume I'll be cpu bound.. but by how much is my question?

    I've tested the Ryzen 3950X, i7-6700K and i5-4670K, and am now testing the Ryzen 5950X.

    The 3090 tests were done on the 3950X, which is a touch slower than your CPU (in 7DTD). I haven't personally tested the i9-10850K or the 6700 XT, so I can only estimate. However, I think your CPU should be able to keep a 6700 XT fed at 1440p; I wouldn't imagine it would run at lower than 80-90% GPU usage.

     

    But that's just my guess based on benchmarks of those components; when you throw it in there, let us know how you get on👍

  5. 22 minutes ago, Beelzybub said:

    @Naz@The_Great_Sephiroth

    So, let's say we lived in an alternate universe where people could actually buy GPUs.

    I install a RTX 3080 FE, just like @McGee9899 has. According to your statements,

    my i5 3550 would get a better frame rate in 7 Days to Die than his Ryzen 3700x.

     

    Every benchmark I can find has a Ryzen 3700x having better single core and multi core

    performance than my i5 3550.

    Except in 7 Days to Die? I'm really skeptical.

    Yes, out of the box you would get better performance, because your CPU is already running the game on 4 cores, since your i5 only has 4 cores. However, newer hardware can get the same 4-core advantage by simply assigning 4-core affinity, and since the newer hardware is more powerful, it will perform better apples to apples.

    3 minutes ago, Jugginator said:

     

    I guess mine is, too, lol. Stayed between 168-173 FPS everywhere outdoors. RX 480 8GB (with a -5% power limit) / R3 3200G (CPU wasn't even overclocked) running at 1080p. Sorry for the overlay I was too lazy to fix it and MSI afterburner was giving me issues.

     

    image.thumb.png.912214998a4112e1036d50c570140484.png

    Case in point: another 4-core CPU ;)

  6. 4 hours ago, Beelzybub said:

    This thread is so confusing to me. On the one hand, I would kill for a 3080 card. On the other hand,

    I get 120+ fps on lowest setting with my i5 3550 from 2012 and a GTX 1060 6Gig.

    I guess my computer is just magic.

    125fpslowsettings.jpg


    Read the post above yours and the magic behind the curtain is revealed XD

  7. Yeah, the 7DTD community are a great group, happy to welcome you aboard😜I hear that quite a lot, "my old hardware ran 7 Days better", and they're not wrong, but once you apply the affinity tweak you're back on even ground and then the newer hardware is, surprise surprise, faster. I think it's because Unity likes 4 cores, and for a really long time Intel 4-core i5s were the choice for gaming. Fast forward a couple of years and 16 cores are available on consumer chips, and Unity just doesn't react well to having so many threads available to it on modern CPUs. Especially with Ryzen CPUs that have multiple dies, you want to keep all the processing for 7DTD on one die for the lowest latency. Intel still benefits from the affinity tweak, but not to the same degree.

     

    If you have the budget for it and are happy with it, then sure, Ryzen 5000 chips are the fastest gaming CPUs available today, and 12 cores could be put to use in other games and certainly future games. I certainly understand wanting to get the most out of your system, especially with GPU prices being what they are; gotta get every penny's worth out of it ;)

    But keep in mind even Ryzen 5000 probably won't max out your GPU at 1440p; you will, however, be getting the most fps possible at 1440p (excluding overclocking and faster ram).

     

  8. 19 hours ago, McGee9899 said:

    Okay, so i assigned certain cores to the game and it helped but still only getting like a solid 55-60% gpu usage. Weird thing is that when i pause the game, the usage shoots up to 90-98%. Any other ideas?

     

     

    There are some other tweaks you can try, but they're more involved and don't yield that much more performance. I've never been able to break 60% average usage on my 3090 at 1440p, so it's important to understand that at 1440p in the current alpha, 100% utilisation isn't possible. As long as the performance you're getting is enjoyable to you, then you don't need to do anything more.


    1. Create a custom stripped Windows 10 install ISO

    This can help a little with your 0.1% lows, but on a modern CPU it would yield 5% better at best.

    2. CPU Overclocking
    This can maybe give you a couple % more gpu utilisation.

    3. New hardware
    Swap your 3200MHz RAM for 3600MHz or drop in a 5700X; that will give you around 20-30% more performance (I just upgraded to the 5950X so I've not run it through my usual suite, but that's from initial testing to give you an idea). But I totally get that the 3700X isn't that old and 3200MHz RAM is perfectly fine in most games; it is an option though. Upgrading to a 4K monitor as meganoth mentioned would also see much greater usage of the GPU.

    I benchmarked many of these tweaks with my old 3950X, so it should give you an idea of what to expect and whether it's worth it for you. Link

  9. 4 hours ago, faatal said:

    That is generally true, but not all the time. At 4k on a RTX 2070 with higher video settings I become GPU bound, so end up turning some video settings down. DLSS would potentially save some GPU, giving a higher FPS without having to turn down some settings.

    Agreed, it's not impossible to be GPU bound in 7DTD. However, many players will run at 1440p and 1080p to get higher refresh rates over resolution, and it's those common resolutions that are more likely to be CPU bound.

     

    I've tested the

    3090(cpu bound at 4k)

    1080ti sli (cpu bound at 4k)

    Single 1080ti (cpu bound at 1440p)

    1080 (cpu bound at 1440p)

    980ti (cpu bound at 1080p)

    R9 295x2 (been a while can't recall) 

    1060 (been a while can't recall) 

    750ti (been a while can't recall)

     

    So while some older cards could benefit from DLSS, they're too old to support the feature. Modern cards that do support it would need to be running at 4K to have a chance at any performance benefit, and sometimes even then it will still be CPU bound at full res. I've seen a couple of cases of the 3080 reported similarly to the 3090, which makes sense since they're not far off each other in terms of performance.

     

    I haven't tested lower in the 3000 stack or anything in the 2000 series, so I can't say for certain. But it's my opinion that the cases where DLSS would benefit people would be in the minority. I think some people would turn it on and play with slightly worse image quality when it would be unnecessary in CPU-bound scenarios.

     

     

  10. 13 minutes ago, Blake_ said:

    I see. While I didn't mean to imply that Unity doesn't do the rendering to any or some extent, the change is tougher than I thought, AND a ton of work, as it means chopping a lot of already-done code to get ... cool brand-new script-managed graphics in C# ? that sounds like a punch in the gut. 

     

    BUT

     

    You guys (probably) will eventually make the decision to chop your built-in one nevertheless. Why?

     

    1-Because HDRP allows for a huge amount of graphics options for developers. 

    2-HDRP supports DLSS natively. And 7dtd needs all the performance it can get.

    3-HDRP supports Ray Tracing. 

     

     

    Downsides to HDRP are training for artists and programmers, which might take a few weeks. That's a big one.

    7DTD is a heavily CPU bound game; the extra performance of solutions like DLSS comes from the GPU not having to render the game at full res, so its benefit in 7DTD would be limited at best. Also, DLSS is only available as far as I know on 2000 and 3000 series GPUs, so if you have a modern GPU you're not going to benefit from the performance uplift of having the GPU do less work, since mid to high range GPUs don't even get full utilisation at full res as it is, due to being CPU bound.

  11. Yes, 30-40% total usage means 2-3 cores at 100%. 7DTD will usually max out 1 or 2 cores, and that's where the CPU bottleneck comes from, even though you still have 60-70% of total CPU utilisation unused. Happy to help😃
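To put rough numbers on that reading: a "total CPU %" figure spreads the work of a few maxed-out cores across every logical core. This sketch assumes an 8-thread CPU purely as an example; the function name and figures are illustrative, not from any real monitoring tool.

```python
# Sketch: convert overall CPU utilisation into "fully loaded core" equivalents.
# An 8-thread CPU is assumed here as an example; real readings vary per system.
def cores_equivalent(total_util_pct, logical_cores):
    """How many cores' worth of work a total utilisation percentage represents."""
    return total_util_pct / 100 * logical_cores

# 30-40% total on an 8-thread chip is roughly 2.4-3.2 maxed-out cores,
# which matches the "2-3 cores at 100%" reading above.
low = cores_equivalent(30, 8)   # ≈ 2.4
high = cores_equivalent(40, 8)  # ≈ 3.2
```

So a seemingly idle-looking 35% overall can still mean the game's main threads are completely saturated.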

  12. It is 100% CPU bound. Modern high end GPUs will always be CPU bottlenecked, regardless of CPU model, especially at 1440p. However, you can increase CPU performance several ways.


    1. Assign 7DTD 4 true cores

    This seems to be a quirk of Unity in general: it scales in performance up to 4 cores, and the more extra cores it has beyond that, the more performance you lose. You want to assign "true cores", not virtual cores (Hyper-Threaded cores, or SMT for AMD). Your real cores are always (at least in Windows) numbered starting with core 0, then its virtual thread (core 1), then a real one (core 2), and so forth. There are numerous ways to do this; one is Task Manager, but that won't work when running EAC and you have to apply it every launch. The easier way is to set up a basic shortcut that launches the game with the desired affinity. I have one already set up if you want to just use that; I'll link it here. It's set up with the default install path, so if yours is installed somewhere else you'll need to edit it to point to the 7DTD exe. There are 2 shortcuts (one for EAC and one without EAC), or if you'd rather set it up yourself there is a guide here. This tweak will give a quick and immediate boost to CPU performance, but I can't promise it will be enough to get 100% utilization at 1440p.

    2. Make sure your RAM's XMP profile is enabled in your motherboard's BIOS
    Ryzen loves fast memory, and if it's running at stock speeds, your CPU's performance will suffer greatly.

     

    3. Close or assign affinity to background programs

    If it doesn't need to be running, close it. If you want to have it running, however, you can use the first method mentioned but assign the background programs the other 4 cores on your CPU. Doing this ensures your background programs will have much less of an impact on 7DTD performance when running on different cores. It's still not as good as not having them running at all, though, so make sure you're only running what you need.

     

    4. Adjust your video options
    Some options are very CPU heavy. Disable screen space reflections (checkbox on the bottom right), change reflection quality and shadows to low, and disable reflective shadows. Set tree quality to medium or high. The other checkbox items (god rays etc.) will also help a little on the CPU but don't make as big a difference as those listed, so those are up to you.
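As a rough sketch of the "true cores" point in tweak 1: with SMT/Hyper-Threading on, Windows numbers logical processors so each physical core owns an even/odd pair (0+1, 2+3, ...), and an affinity shortcut built on `cmd /c start /affinity <hexmask>` takes a hex bitmask of the logical processors to allow. This hypothetical helper just computes that mask; the exe path in the comment is illustrative, not the linked shortcut itself.

```python
# Sketch: build the hex affinity mask used by "start /affinity".
# With SMT/HT enabled, logical CPUs 0, 2, 4, 6 are the first four PHYSICAL
# cores (each even/odd pair of logical CPUs shares one physical core).
def affinity_mask(logical_cpus):
    """OR together one bit per allowed logical processor."""
    mask = 0
    for cpu in logical_cpus:
        mask |= 1 << cpu
    return mask

with_smt = affinity_mask([0, 2, 4, 6])     # first 4 physical cores, HT/SMT on
without_smt = affinity_mask([0, 1, 2, 3])  # first 4 cores, HT/SMT off

# 0x55 -> shortcut target would look like (path illustrative):
#   cmd /c start "" /affinity 55 "C:\...\7DaysToDie.exe"
print(f"{with_smt:X}")     # 55
print(f"{without_smt:X}")  # F
```

Picking the even-numbered logical CPUs is what keeps the game on 4 distinct physical cores rather than 2 cores plus their Hyper-Threaded siblings.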

  13. 36 minutes ago, The_Great_Sephiroth said:

    I suggest you spend at least five seconds looking at the results I linked. You cannot override this. In fact, while playing Unity games, my monitor is in 144Hz mode. The game only allows 60fps due to internal timing. Again, if you want to understand it and not simply beg others to do it for you, read Unity's manuals. Also, if you read the replies on the code you linked, you'd see it still locks you to 60fps when built. You're not even reading the entire post. You read a single doc and ended it.

     

    Since you're too lazy to literally just read some of what I linked, here is a very relevant result.

     

    Even after disabling v-sync and using the code you linked, capped at 60fps

     

    Oh, and here's an answer literally on the Unity forum explaining that it won't go above 60fps.

     

    Locked at 60

     

    Finally, if you own ANY Unity game it is easy to see this. Simply run with v-sync on so you see a true framerate and use Steam or your favorite FPS counter and you'll see that despite owning eight 3090's in SLI you cannot break 60fps without breaking things. There was a thread on one of the Unity forums about this and the devs literally said that timing gets crazy above 60fps internally. This is why you can peg out at 60fps but not get higher. Disabling vertical sync can produce insane framerates (I get almost 1000fps in Ark with v-sync disabled, on my 144Hz monitor) but due to the lack of sync with the monitor, many of those buffers may be a single pixel instead of the millions which make up our scene.

     

    In addition to the list I mentioned earlier other Unity games I own which also lock to 60fps max even while the monitor is at 144Hz include The Long Dark, Firewatch, Guns of Icarus, Jazzpunk, Kona, Slender: The Arrival, Thief Simulator, and Undertale. I am sure I own more Unity titles, but explain why every single Unity title I have ever owned, across a wide array of systems, always pumps 60fps and no more. From Pentium D960's with 4GB of RAM to this i7-6950X with 128GB of RAM, Unity has NEVER gone above 60fps and there IS a reason. I just don't feel like spending my Saturday digging through Unity forums to find the answer. Do your own homework.

    Oh, I did. Neither of your linked "sources" are from the Unity docs; they're troubleshooting forum posts, and if you read the posts as you suggest, they don't support your fantasy either. Everyone in those threads seems to think it's not normal and to contact Unity support. No one else there seems to have the same problem, just as no one else with a high refresh rate monitor is having your experience.

    I can run 7DTD, Subnautica and Subnautica Below Zero, all with G-Sync and vsync on, at well over 60fps.
     

    Spoiler

    1441891393_Screenshot2021-07-17194654.thumb.png.025cd33f10388e73423e225a1fbc463b.png

     

    What you're experiencing isn't normal, and getting mad at someone trying to help you correct it isn't helpful either. I have already done my homework; I've benchmarked this game for hundreds of hours across multiple alphas and Unity versions, with various settings, on half a dozen different systems. The only time I've ever had the game locked to 60 with vsync in all that time has been a settings misconfiguration. Don't believe me if you wish, keep playing at 60, but I'm certainly not locking my game to 60.

    This topic has gone off topic too much as it is, so this is the last i'll say, enjoy the game have a good day.

  14. 39 minutes ago, The_Great_Sephiroth said:

    Have fun, I spent an entire day reading this mess about a year ago. All I can tell you is that Unreal Engine > Unity. There's plenty of easy things to check though. Look around. See any refresh rate setting in Unity? No? Me either. Here's a boat-load of threads on this very thing, and after reading enough you will find one which links to Unity Engine documentation (I am NOT spending another day looking for it) which explains why timing in their engine is based on 60Hz/60fps. This is a Unity thing, NOT a 7 Days thing.

     

    Have fun!

    So, no source😆 What you're describing isn't a Unity problem, it's a settings problem. GPUs don't render partial frames (unless rendering with SFR in an SLI setup, but never independently rendering partial frames). What's actually happening is that with vsync off, your monitor is displaying parts of multiple COMPLETE frames at once. This is called screen tearing, and it happens when your GPU is rendering more frames than your monitor can display, or fewer frames than your monitor's refresh rate. The reason you're locked to 60fps in games with vsync on is that you haven't noticed your monitor is running at 60Hz. Yes, 60Hz+ monitors don't always run at the Hz on the side of the box. This happens sometimes when you have a driver update, unplug/replug the display cable, change primary monitor, and so forth.

    You can once again enjoy 60+ fps gaming with G-Sync and vsync enabled by opening Nvidia Control Panel > Display > Change resolution > Refresh rate dropdown
     

    Spoiler

    1468324007_Screenshot2021-07-17184121.png.df06f9ffa5cd36ba9a22987fa1f9b17d.png

     

    The first Unity docs result in your suggested Google search on the topic:

    Additionally if the QualitySettings.vSyncCount property is set, the targetFrameRate will be ignored and instead the game will use the vSyncCount and the platform's default render rate to determine the target frame rate. For example, if the platform's default render rate is 60 frames per second and vSyncCount is set to 2, the game will target 30 frames per second.
    using UnityEngine;
    
    public class Example
    {
        void Start()
        {
            // Make the game run as fast as possible
            Application.targetFrameRate = 300;
        }
    }
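The rule quoted above can be put into numbers (a sketch only; this helper is not Unity API, and the 144Hz/60Hz figures are just the monitors discussed in this thread):

```python
# Sketch of the Unity rule quoted above: when vSyncCount is non-zero,
# targetFrameRate is ignored and the cap becomes refresh rate / vSyncCount.
def effective_fps_cap(refresh_hz, vsync_count, target_frame_rate):
    if vsync_count > 0:
        return refresh_hz / vsync_count
    return target_frame_rate

# A 144Hz monitor with vSyncCount = 1 caps at 144fps, not 60 --
# so a Unity game stuck at 60 on a "144Hz" panel points at the monitor
# actually running in 60Hz mode, not at an engine-wide limit.
print(effective_fps_cap(144, 1, 300))  # 144.0
print(effective_fps_cap(60, 2, 300))   # 30.0
print(effective_fps_cap(144, 0, 300))  # 300 (vsync off: targetFrameRate applies)
```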

     

     

  15. 1 hour ago, The_Great_Sephiroth said:

    Doughphunghus, 7 Days cannot attain more than 60fps. Unity is a very dated engine and internally things go haywire above 60fps. The only way to fool your FPS counter into thinking you are above 60fps is to disable vertical sync, which means you get a LOT of incomplete buffers which count as entire frames, despite not being so. I have a 144Hz G-Sync monitor and a 2080 Ti. I stayed pegged at 60fps the entire time I am playing 7 Days, and I play in 1440p. I leave vertical sync on so I don't get tearing and so G-Sync works. Always capped at 60fps in Unity games though. Subnautica, Subnautica BZ, 7 Days, My Summer Car, The Forest, Raft, LoE, Stranded Deep, and others all lock you to 60 real FPS max. The good news is that my 2080 Ti never warms up while playing those titles!

    That's interesting, I've never heard anything like that before. Can you link your source? I wouldn't mind a read.

  16. 6 hours ago, doughphunghus said:

    This may sound cheesy but....

    I had a lot of 'old junk" that was more for tinkering than useful.  Basically I feel better knowing I'm learning on something that's basically trash vs. spending money tearing something new up.  And I had some stuff that was old 'hobbies" I was never going to get back into but I couldn't throw away because "it was still good stuff...hardly used!"

     

    Anyway: My solution, should anyone want some ideas:

    1. Give away/donate what you can to people who might actually want it (not just dump it on the salvation army). Of course sell what you can if its really, truly high dollar.

     

    2. Take pictures of your junk..wait...not... that's not what I mean. Literally just take pictures and put them in "the picture collection" whatever you use.  Take as much as you want, obviously. You'll likely never look at them again, but somehow it "stores" them in your brain as "not completely gone" and you can still show people the stuff if it ever comes up... so its easier to throw out the physical stuff.

     

    3. For everything else you can't throw out (sentimental junk): either display it somehow as "decor" or carefully pack it into a box and label the box with the date you packed it and contents. put it in a closet or something. If you don't open it, when/if you move homes you can choose to take it or not. Eventually it will seem silly to keep it. Or your house will burn down with the box in it.

     

    Then: After you're done, treat yourself and buy something new, like a system that can play 7D2D at full graphics settings at 60+ FPS :). Even get a new keyboard, monitor, the works...Not trying to be a jerk, just saying "you deserve this!"

    I always sell anything I'm not using. For PC hardware I have pictures like you said, but I also keep the benchmarks in case I ever want to compare. Older hardware usually doesn't have much value, although some is worth more than its original cost decades later. I always try to sell as soon as I no longer use something, to get the most resale value from it. But some things are just unsellable; I've had books and DVDs listed for years for pennies, and no one wants them XD

  17. I've wanted to do a large test like this for a while, to see how each tweak or overclocked component contributes to the final "Maximum Performance" results. Some interesting and surprising results, all told. The way I've tested everything is as follows: all results have a "Max Performance" table always on top. Below that are the results from disabling or removing one individual tweak or overclocked component, to see what impact it has compared to running all tweaks. On the left is an overall difference table that adds up all the results from a data category like "average framerate", does the same for the other table, and then compares the difference.

     

    The one exception is the custom Windows ISO install; this is just my usual benching methods without some of the rest of the tweaks. It's also done in A19.0 instead of 19.5, since I started these tests after I had already wiped the vanilla Windows 10 install. I could have reinstalled vanilla Windows 10, retested, then simply wiped again and installed the custom image. However, I can't remember exactly everything I had installed and how every single thing was configured. Since I wouldn't be able to have them set up identically program- and configuration-wise anyway, I've opted to save 16 hours and do pretty much the same comparison, but in A19.0.

     

    My Other 7DTD Benchmarks

     

     

    Benchmark Notes & Disclaimers

     

     

    Spoiler

    1. These figures should be taken as ballpark figures and not absolute values.
    7DTD is a very difficult game to benchmark accurately. It's in alpha and sometimes does weird things. Some things are also difficult to account for, such as the AI; they will behave and spawn differently every run, which leads to #2.

    2. What's controlled for and what's not
    I haven't controlled for driver versions and Windows versions. My goal was just to get "close enough" results rather than as accurate as possible. That said, I have controlled for memory leaks by restarting the game after every run. Time of day is reset from the console after every run. When changing resolution I changed it ingame before shutting down, then validated with the Unity screen selector on next launch. No zombies/animals were killed, so areas weren't "cleared" until x amount of days, to ensure that on following runs they would respawn (although I can't do much if zombies kill animals or vice versa). I only did one pass on each test; however, any result that didn't look right or didn't make sense was discarded and retested.

     

    3. One system isn't enough to draw definitive conclusions for every configuration. The conclusions found here may or may not apply to your own systems; these results are only really comparable if you have similar hardware. They will, however, give an idea of where the current performance is at with the hardware that was tested.

     

    4. It's Alpha

    Any update could change any conclusions drawn from these tests. As a work in progress, things are always changing, improving and sometimes regressing; by the time 7DTD goes gold these results will be obsolete and invalid.

     

    5. Console options used
    All tests are run with increased view distance (sg optionsgfxviewdistance 12).

    6. Tests on different resolutions were done on monitors native aspect ratio
    16:9 Resolutions were tested on an Asus PB287Q 4K 3840x2160 60HZ Monitor
    21:9 Resolution was tested on an LG 38GL950G 3840x1600 175Hz Monitor

     

    7. The benchmark run
    I've been using the same run since A14 for all tests; you can find more details on the exact run in my A15 benchmarks I did ages ago Here

     

    System Tested

     

    Trident

    CPU AMD R9 3950x
    Motherboard Gigabyte X570 Aorus Xtreme
    Ram G.SKILL Trident Z Neo 64GB (4 x 16GB) DDR4 3600Mhz CL16-19-19-39
    Storage 1 (Save Data): 1TB Sabrent Rocket NVMe PCIe 4 M.2 SSD
    Storage 2 (Game Data): 2TB Sabrent Rocket NVMe PCIe 4 M.2 SSD
    GPU MSI Suprim X RTX 3090 24GB

     

    System Overclock Notes

     

    [Gaming Profile V2]

    CPU: Vcore: 1.45V
    CCD 0: CCX 0: 45.50 
    CCD 0: CCX 1: 44.50 
    CCD1 DISABLED

    SMT: Disabled

    Ram: Stock: Default JEDEC(2133Mhz) , Overclock: XMP Enabled (3600Mhz)    
    GPU Core:+160 Mem:+550      
    GPU Bios Flashed with a 500 Watt Power Target   

     

    Tested With 2 Video Settings

     

    "Ultra Settings 1" [screenshot]
    "Ultra Settings 2" [screenshot]
    "Lower Settings 1" [screenshot]
    "Lower Settings 2" [screenshot]

     

    Full Stock VS All Tweaks

    The first results compare a full stock config to running all the tested tweaks. As usual, the original Excel files are available Here. The other tested tweaks are as follows:
    - Custom Windows ISO (A19.0)
    - Assigning Affinity (Limiting the game to use only certain CPU Cores) (A19.5)
    - Having Background Programs Open (A19.5)
    - CPU Overclock Disabled (A19.5)
    - GPU Overclock Disabled (A19.5)
    - Setting the GPU to prefer max performance (A19.5)
    - Ram OC Disabled (A19.5)


    A19.5 Ultra Settings

    PE0vUqg.png
    A19.5 Lower Settings

    LEm345d.png

     

    Conclusions

    With Ultra settings we see an uplift of over +130% for the average and +100% for the 0.1% lows; Lower settings saw nearly 90% and 60%. This is quite literally double performance: no shelling out for new hardware, no game optimisations, no cheating by lowering settings or resolution. Anyone can apply these tweaks; most of them are really simple and take very little time to apply yourself. However, it's not all roses; some tweaks, as you'll see, do very little. I've included them anyway, as they may only apply if you have similar hardware: depending on what exact hardware you have, they might make no difference here but could help substantially on your system. I'll go into more detail on these cases in each of the tweaks' conclusions. For now, this demonstrates that spending some time tweaking and messing around is definitely well worth it. I'd go as far as to say that if you're fortunate enough to land yourself some current gen high end hardware with the current shortages and scalping, you owe it to yourself to spend some time making sure you get the most out of your system.

     

    Benchmarks

     

    Spoiler

    Custom Windows ISO

    A19.0 Ultra Settings

    GSwOZ3H.png

     

    A19.0 Lower Settings

    5nMoJ4Y.png

     

    Conclusions

    At first glance we don't see much change at all here. Best case overclocked is up to +5% on the 0.1% lows; however, looking at the all-core stock results, 0.1% lows are nearly +30%, which would definitely be worth it. That's nearly as good as the changes we saw going from A19.0 to A19.5. The reason it helps at stock is that this tweak increases CPU performance in Windows, which is always running in the background (unless you're running Linux). It does this when you create a custom install image for Windows 10, by removing Windows features you don't use or want (looking at you, Windows telemetry and Cortana, toodles! XD). When things are configured optimally plus an overclock, it doesn't help much in this case; the 3950X is a relatively recent and powerful CPU. However, if you're running an older chip this could drastically improve your experience. If you want to find out more about how to do this yourself, I'll leave a link to a video tutorial. In that guide you can either use his preconfigured ISO, or skip halfway through and make your own based on your needs. I'd recommend you make your own, since other pre-made ones may cut features you want to use.

     

    Assigning CPU Affinity 

    Spoiler

    A19.5 Ultra Settings

    z8sEmCn.png

    A19.5 Lower Settings

    HaRB1qJ.png

     

     

    Conclusions

    +5-15% is a nice bump, but not all that amazing... or is it? The 3950X has 16 cores / 32 threads; in this test virtual threads were disabled and half the cores disabled, so it was configured as 8 cores / 8 threads. 7DTD likes 4 true cores, nothing more, nothing less; I've covered this in my first A19 results. The more threads and cores you have, the worse 7DTD will perform, so if you have more than 4 threads (preferably true cores, not virtual ones) this tweak is pretty much mandatory. In my other benchmarks previously discussed, we did test stock and overclocked with all 32 threads available to 7DTD, and the performance improvements going from 32 threads to 8 alone were huge. That was done in the BIOS by limiting the cores and disabling virtual threads; however, you don't need to ever set foot in the BIOS to apply this tweak. It can be done with simple Windows shortcuts, and works with and without EAC. If you want to make your own, here is a guide, or you can use mine:
    For 4+ core CPUs WITH virtual threads (AMD: SMT, Intel: HT) - Download

    For 4+ core CPUs WITHOUT virtual threads (AMD: SMT, Intel: HT) - Download

     

    Background Programs Open Benchmarks

     

    Spoiler

    A19.5 Ultra Settings

    T8QjuGf.png

    A19.5 Lower Settings

    07t9PR5.png

     

     

    Conclusions

    The background apps in this case were:

-Password manager app
-Corsair's iCUE
-Logitech's Gaming Hub
-HWiNFO64

    - + About a dozen windows services

     

It's also worth noting that these apps were set with affinity to the other 4 cores not used by 7DTD. Doing this isn't as good as not having them running at all, since they still share other parts of the CPU, like the cache. However, these are apps I normally always have running, so I used the same method discussed in the previous benchmarks to assign them to cores not used by the main 4 threads, to see if it would limit the performance impact of having them open. While having them open, even with affinity set, still has an impact as we can see, it does show that if you must have background apps open, it's worth assigning them to other cores if you have cores to spare. Windows has a "Startup" folder that opens every shortcut inside it at startup, so you can just create shortcuts that use cores not used by 7DTD, dump them in that folder, and they will open on startup with affinity applied without you having to do anything. The only annoyance I've found is that opening them this way opens them into their main window, instead of minimised to the system tray like a normal "run on startup".
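As a sketch of that startup-folder approach (the app paths below are placeholders for illustration, not my actual setup), generating the shortcut targets for the spare cores might look like:

```python
# Build "start /affinity" command lines that pin background apps to the
# cores 7DTD is NOT using. Paths below are placeholders.

def mask_hex(cores):
    m = 0
    for c in cores:
        m |= 1 << c
    return format(m, "X")

SPARE_CORES = [4, 5, 6, 7]            # 7DTD gets 0-3, these get the rest
apps = [r"C:\Tools\HWiNFO64.exe",     # placeholder paths
        r"C:\Tools\iCUE.exe"]

commands = [f'cmd /c start "" /affinity {mask_hex(SPARE_CORES)} "{app}"'
            for app in apps]
for cmd in commands:
    print(cmd)
# Shortcuts with these targets dropped into shell:startup will launch
# the apps already pinned to the spare cores on every boot.
```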

     

    CPU Overclock Benchmarks

     

    Spoiler

    A19.5 Ultra Settings

    CEYhcg3.png

    A19.5 Lower Settings

    sFutrcf.png

     

     

    Conclusions

Like affinity, this is another improvement, but it's important to note these are the results you can expect from a modern overclocked CPU. Modern CPUs already run pretty close to their maximum potential out of the factory. If you have an older CPU, however, it's a different story. I have an old i5 4670K: base clock 3.4GHz, boost 3.8GHz. What that means is that 95% of the time it runs at 3.4GHz; only lightly threaded workloads see the boost clock, and only for a short time under stock conditions. However, that chip can hit 4.6GHz overclocked on all cores, and that's a huge difference. TLDR: recent CPUs, look at the above results and decide if it's worth it; older CPUs, crank that clock ;) Overclocking is done differently depending on your CPU and motherboard's BIOS; if you google guides for your exact chip and board, you should be able to find what you need.

     

    GPU Overclock Benchmarks

    Spoiler

    A19.5 Ultra Settings

    1bsJYko.png

    A19.5 Lower Settings

    jdswKl4.png

     

     

    Conclusions

Here is our first case of "makes no difference". However, I wouldn't discount it: in this case the 3090 is too overpowered, even at stock, for the 3950X to keep it fed, even under ideal conditions. If you're running an older GPU, or you often see GPU usage over 90-95%, then an overclock can improve performance. For me, however, I'm going to run it at stock and stop wasting all that electricity XD There are different tools used to overclock a GPU; google is your friend here ;)

     

     Setting The GPU To Prefer Max Performance Benchmarks

     

    Spoiler

    A19.5 Ultra Settings

    4PvCbei.png

    A19.5 Lower Settings

    Rz5dbVd.png

     

    Conclusions

Yet another case of no improvement, and yet another "however": if your system is severely CPU bottlenecked and your GPU's clock speed is constantly jumping up and down to adjust to the load, it's worth giving this a try. Normally I wouldn't recommend it, as it burns electricity for no benefit, but if you want to try it you can enable it in your GPU's control panel. For Nvidia it's Manage 3D Settings > Program Settings > Add (find or browse to 7DTD's exe) > Power Management.

     

     

    Ram Overclock Benchmarks

     

    Spoiler

    A19.5 Ultra Settings

    f6HtH4m.png

    A19.5 Lower Settings

    9K0pG1Q.png

     

    Conclusions

Now we see a very nice improvement: +15-25% across the board. The keen-eyed among you may notice "my overclock" is actually just enabling the RAM's XMP profile, not applying a custom OC. That's because I wasn't able to push my RAM past the XMP profile speeds, since I'm populating all 4 of my motherboard's slots; XMP is the best I can do here, and since XMP is outside official spec, it's technically an overclock ;). This is also another case of "your mileage may vary". AMD Ryzen chips benefit massively from fast RAM, up to a point: the RAM has to be kept 1:1 with the CPU's Infinity Fabric clock, meaning for the 3000 series 1:1 RAM speeds max out at around 3733-3800MHz. It's possible to go higher, but then it's no longer 1:1 and you take a latency penalty, so it's not worth it IMO. Intel CPUs, on the other hand, don't benefit as much from faster RAM, but can support higher speed RAM without worrying about 1:1 issues (11th gen does have Gear Ratios, which are the same idea).
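To make the 1:1 arithmetic concrete, here's a quick sketch. The FCLK ceiling used below is a typical figure for Ryzen 3000 silicon that I'm assuming for illustration, not an official spec; individual chips vary:

```python
# DDR4 "speed" is transfers per second; the actual memory clock (MCLK) is
# half that. On Ryzen 3000, for the best latency the Infinity Fabric clock
# (FCLK) should match MCLK 1:1.

FCLK_CEILING_MHZ = 1866   # assumed typical max stable FCLK for Ryzen 3000

def ryzen_1to1(ddr_rate_mts):
    """Return (MCLK in MHz, whether 1:1 is achievable under the ceiling)."""
    mclk = ddr_rate_mts // 2
    return mclk, mclk <= FCLK_CEILING_MHZ

print(ryzen_1to1(3600))   # (1800, True)  -> 1:1 holds, the sweet spot
print(ryzen_1to1(4000))   # (2000, False) -> FCLK can't follow, 2:1 penalty
```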

     

Now the good and bad news about applying this yourself. The good news: if your sticks have an XMP profile, it's one setting in your BIOS and you're done. The bad news: if your system is a prebuilt that doesn't have RAM with XMP, it may still be possible to overclock, but if it's your standard Dell affair I wouldn't count on it. More bad news if you can overclock: RAM overclocking is much more complicated than CPU or GPU overclocking; there are dozens of timings and voltages to adjust and it's a testing nightmare. So make sure XMP is enabled. If your sticks don't have it and you're on Intel, I wouldn't worry about it. If you're on AMD Ryzen, your options are buying new sticks with XMP profiles or attempting a manual OC; again, google will be your best friend here.

     

     

18. Got a bit lax on tracking the performance of the last few updates. All caught up now; we have comparative benchmarks from A19.2 up to the current A19.5. To save time I just tested one system, "Hotbox"; since it isn't as powerful as my main rig, it should in theory benefit the most from optimisations.

     

    My Other 7DTD Benchmarks
     

     

     

    Systems Tested

    Spoiler

     

    Hotbox

    CPU  Intel I7 6700K 4 Core 8 Threads
    Motherboard ASUS STRIX Z270I Mini ITX Motherboard
    Ram Corsair Vengeance LPX 32gb (2x16gb) DDR4 3000 MHz
    Storage Samsung SM951 512Gb M.2 PCI 3.0 SSD (AHCI Model)
    GPU Evga Sc2 Gaming Black Edition GTX 1080ti

     

     

    Systems Overclock Notes

    Spoiler

     

    Hotbox

    CPU: 4.5Ghz, 1.3V, Cache ratio 43

    Ram: Stock: Default JEDEC(2133Mhz) , Overclock: XMP Enabled (3000Mhz)

    GPU: Core:+50 Mem:+325

     

     

    Benchmark Notes & Disclaimers

    Spoiler

     

    1. These figures should be taken as ball park figures and not absolute values.
    7DTD is a very difficult game to benchmark accurately. It's in alpha and sometimes does weird things. Also some things are difficult to account for such as the AI, they will behave differently and spawn differently every run, which leads to #2.

2. What's controlled for and what's not
I haven't controlled for driver versions and Windows versions; my goal was just to get "close enough" results rather than 100% accuracy. That said, I have controlled for memory leaks by restarting the game after every run. Time of day is reset from the console after every run. When changing resolution, I changed it ingame before shutting down, then validated it with the Unity screen selector on the next launch. No zombies/animals were killed, so areas weren't "cleared" until X amount of days, to ensure that on following runs they would respawn (although I can't do much if zombies kill animals or vice versa). I only did 1 pass on each test; however, any result that didn't look right or didn't make sense was discarded and retested.

     

    3. 1 system isn't enough to draw definitive conclusions for every configuration. The conclusions found here may or may not apply to your own systems, these results are only really comparable if you have similar hardware. But they will however give an idea of where the current performance is at with the hardware that was tested.

     

    4. It's Alpha

Any update could change any conclusions drawn from these tests. Also, as a work in progress, things are always improving and sometimes regressing; by the time 7DTD goes gold these results will be obsolete and invalid. Things are always changing in alpha.

     

    5. Console Options used
All tests are run with increased view distance (sg optionsgfxviewdistance 12).

    6. Tests on different resolutions were done on monitors native aspect ratio
    16:9 Resolutions were tested on an Asus PB287Q 4K 3840x2160 60HZ Monitor
    21:9 Resolution was tested on an LG 38GL950G 3840x1600 175Hz Monitor

     

    7. The benchmark Run
    I've been using the same run since A14 for all tests, you can find more details on the exact run in my a15 benchmarks i did ages ago Here

     

8. For Hotbox I run the game without the affinity tweak. I may move to running with it in the future though.

     

     

     

    2 Different video Settings Tested

    Spoiler

     

Ultra Settings 1
a19_ultra_settings_1.thumb.png.0df3f5e62c068afe7bf9af894623443b.png

Ultra Settings 2
7_Days_to_Die_Screenshot_2020_09.21_-_15_57_29_22.thumb.png.ee757ad5e94777d1cec8f4efe8126f9d.png

Lower Settings 1
lower_settings_1.thumb.png.4d3b94e766d01d2f5f364caf9eb51eea.png

Lower Settings 2
7_Days_to_Die_Screenshot_2020_09.21_-_15_57_29_22.thumb.png.ee757ad5e94777d1cec8f4efe8126f9d.png

     


    Hotbox Benchmarks

    Original Excel Files Download

     

    A19.2 VS A19.3 Benchmarks


     

    Spoiler

     

    Ultra Settings 

    spacer.png

     

    Lower Settings

    spacer.png

     

     

    A19.3 VS A19.4 Benchmarks


     

    Spoiler

     

    Ultra Settings 

    spacer.png

     

    Lower Settings

    spacer.png

     

     

    A19.4 VS A19.5 Benchmarks


     

    Spoiler

     

    Ultra Settings 

    spacer.png

     

    Lower Settings

    spacer.png

     

     

    A19.0 VS A19.5 Benchmarks


     

    Spoiler

     

    Ultra Settings 

    spacer.png

     

    Lower Settings

    spacer.png

     

     

     

    Conclusion

Through the first few point updates, not much changed. However, 19.5 gave us significant improvements, and if we look at A19.0 vs A19.5 we see huge gains in the 1% and 0.1% lows. The gains here, up to nearly +30%, are equivalent to 2-3 generations of CPU single-core performance. And this extra performance is not only free for everyone, it requires the user to do nothing but let Steam auto-update.

There wasn't as much of an improvement in average frame rate, but we still got up to a 10% uplift. 10% doesn't sound like much, but it can be the difference between unplayable and playable for some people. It's also important to note that the 1% and 0.1% numbers are arguably more important: they represent how "smooth" the framerate is. A wide gap between those numbers and the average framerate, or sub-30fps lows, shows how often there are significant "stutters" or "micro pauses" that make a game feel bad to play. If the framerate consistently drops to 5fps, even for just half a second at a time, the game would still feel unplayable even with an average over 100fps.
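To illustrate why the lows matter, here's roughly how 1% / 0.1% figures are derived (exact methods vary between benchmarking tools; this is the common "average of the worst slice" approach):

```python
# Sort every frame's FPS and average the worst slice: the worst 1% of
# frames gives the "1% low", the worst 0.1% gives the "0.1% low".

def percentile_low(fps_samples, fraction):
    worst = sorted(fps_samples)[: max(1, int(len(fps_samples) * fraction))]
    return sum(worst) / len(worst)

# 1000 frames: mostly smooth 100fps with ten hard stutters mixed in
samples = [100.0] * 990 + [20.0] * 10

print(round(sum(samples) / len(samples), 1))   # 99.2 -> average hides it
print(percentile_low(samples, 0.01))           # 20.0 -> 1% low exposes it
```

The average says "buttery smooth"; the 1% low tells you the run actually stuttered.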

     

So in summary, the Pimps have done a great job here. If they can keep adding gains here and there in A20 and beyond, the game will run butter smooth for most people in no time. This is also a promising glimpse into A20's performance. It's not the end of the road for optimisations yet, but it's a good sign of things to come.🤘

19. It's a driver feature; the reason it's currently limited to certain titles is that Nvidia only enables it on whitelisted titles, because it can cause worse performance in some games. So Nvidia only adds games they've tested and found it benefits.

     

However, you can enable it for any game using Profile Inspector, the same tool used to enable features like SLI in games that don't officially support it.

     

    https://www.google.com/amp/s/wccftech.com/heres-how-you-can-enable-resizable-bar-support-in-any-game-via-nvidia-inspector/amp/

     

I suspect the reason it doesn't seem to do anything in 7DTD is that 7DTD uses very small textures. Most of the textures in the game are 2D and have file sizes of a couple hundred KB; even high quality textures like the drawbridge are only about 13MB. So the current 256MB window the CPU sees the frame buffer through is already large enough.

20. Resizable BAR is a recent performance feature, available on current generation hardware, first from AMD and then Nvidia and Intel. Up until now, the CPU accessed the GPU's memory in 256MB chunks; Resizable BAR allows the CPU to access the GPU's entire frame buffer at once, potentially improving performance. Titles that currently have the feature enabled can see a modest uplift, up to* +12%. You need 3 specific pieces of hardware to enable it. Also, if you have a supported GPU, it may need a firmware update as well as a motherboard BIOS update.


    Supported Hardware

    Spoiler

     

Supported CPUs: AMD Ryzen 3000 or 5000 series; Intel 10th or 11th Gen
Supported Motherboards: AMD 400 or 500 series chipsets; Intel Z490, H410, H470, B460 and others that support 11th gen CPUs
Supported GPUs: AMD RX 6000 series; Nvidia RTX 3000 series

     

     

    My Other 7DTD Benchmarks

     

     

    Currently the feature will only be enabled in select titles because enabling it in some games can make performance worse. However it's possible to enable it for any game using Nvidia Profile Inspector, which is what I've done here.

     

    spacer.png

     

    Benchmark Notes & Disclaimers

     

    Spoiler

     

    1. These figures should be taken as ball park figures and not absolute values.
    7DTD is a very difficult game to benchmark accurately. It's in alpha and sometimes does weird things. Also some things are difficult to account for such as the AI, they will behave differently and spawn differently every run, which leads to #2.

2. What's controlled for and what's not
I haven't controlled for driver versions and Windows versions; my goal was just to get "close enough" results rather than 100% accuracy. That said, I have controlled for memory leaks by restarting the game after every run. Time of day is reset from the console after every run. When changing resolution, I changed it ingame before shutting down, then validated it with the Unity screen selector on the next launch. Background programs were running for the tests, but assigned to cores not used by 7DTD. No zombies/animals were killed, so areas weren't "cleared" until X amount of days, to ensure that on following runs they would respawn (although I can't do much if zombies kill animals or vice versa). I only did 1 pass on each test; however, any result that didn't look right or didn't make sense was discarded and retested.

     

    3. 1 system isn't enough to draw definitive conclusions for every configuration. The conclusions found here may or may not apply to your own systems, these results are only really comparable if you have similar hardware. But they will however give an idea of where the current performance is at with the hardware that was tested.

     

    4. It's Alpha

Any update could change any conclusions drawn from these tests. Also, as a work in progress, things are always improving and sometimes regressing; by the time 7DTD goes gold these results will be obsolete and invalid. Things are always changing in alpha.

     

    5. Console Options used
All tests are run with increased view distance (sg optionsgfxviewdistance 12).

    6. Tests on different resolutions were done on monitors native aspect ratio
    16:9 Resolutions were tested on an Asus PB287Q 4K 3840x2160 60HZ Monitor
    21:9 Resolution was tested on an LG 38GL950G 3840x1600 175Hz Monitor

     

    7. The benchmark Run
    I've been using the same run since A14 for all tests, you can find more details on the exact run in my a15 benchmarks i did ages ago Here

     

     

    System Tested

     

    Spoiler

     

    Trident

    CPU AMD R9 3950x
    Motherboard Gigabyte X570 Aorus Xtreme
    Ram
    G.SKILL Trident Z Neo 64GB (4 x 16GB) DDR4 3600Mhz CL16-19-19-39
    Storage 1 Save Data 1TB Sabrent Rocket NVMe PCIe 4 M.2 SSD
    Storage 2 Game Data 2TB Sabrent Rocket NVMe PCIe 4 M.2 SSD
    GPU MSI Suprim X RTX 3090 24GB (With EVGA Firmware)

     

     

     

     

    System Overclock Notes

    Spoiler

     

    [Gaming Profile V2]

    CPU: Vcore: 1.45V
    CCD 0: CCX 0: 45.50 
    CCD 0: CCX 1: 44.50 
    CCD1 DISABLED

    SMT: Disabled

    Ram: Stock: Default JEDEC(2133Mhz) , Overclock: XMP Enabled (3600Mhz)    
    GPU Core:+160 Mem:+550      
    GPU Bios Flashed with a 500 Watt Power Target

     

     

    Tested With 2 Video Settings

     

    Spoiler

     

    Ultra Settings 1

    spacer.png

    Ultra Settings 2

    spacer.png

    Lower Settings 1

    spacer.png

    Lower Settings 2

    spacer.png

     

     

    Trident Benchmarks

I just did 1 profile for these tests to save time, using Gaming Profile V2. Custom game profiles usually get reset to default after a driver update, which indeed happened in the middle of the tests, but the RSB-enabled profile was re-applied after the update and before further testing, so RSB should be enabled for all tests here. I also did the tests in A19.0, simply because I already had the numbers with RSB disabled, cuz I'm lazy 😛 But I did do a 4K test in A19.5 on both settings, just to be sure there hasn't been a recent addition that might benefit from RSB.

     

    Original Benchmark Excel Files: Download

     

    A19.0 B180 Ultra Settings

    Spoiler

    spacer.png

     

    A19.0 B180 Lower Settings

    Spoiler

    spacer.png

     

    Conclusions

So that was not great. Overall it's within run-to-run variance; however, it's consistently a slight improvement, which does suggest it may make things marginally better overall. Anything below 1% is almost certainly margin of error, so if it does provide a boost in 7DTD, it's below 1%. I checked and re-checked to make sure both that RSB was enabled at a hardware level and that the required bits in Profile Inspector were applied, and indeed they were. So this means 1 of 2 things: 1. there is some other factor in play preventing Resizable BAR from engaging in 7DTD, or 2. Resizable BAR just won't have any meaningful benefit in 7DTD. I even tried setting a custom resolution on my 4K monitor to run 7DTD at 8K, to put more strain on the GPU's frame buffer. Even then, consuming 16 and a half gigs of VRAM, there wasn't really any difference.

If you want to try it yourself, I'll leave the profile HERE, but please run some before and after benchmarks to see if it makes any difference on your machine, since the tests I've done show no real difference.

  21. 13 minutes ago, camsterdude said:

    i agree with you, as the person that made this post in the first place i have already stated that the problem was fixed ages ago with just turning off 2 in alpha settings. I also upgraded to a 5950x and yes i do get a @%$# ton more fps than before so it was a per core bottleneck which i did infact already knew but those 2 settings actually made it a whole lot worse. i could after turning them off play all other settings maxed at well over 150 fps.

Are you talking about the SSAO and reflection options? Or console command options like POIs?

  22. On 4/14/2021 at 9:08 AM, drperry said:

    Could there be a possible conflict with the hardware scheduling of AMD graphics cards vs. the software scheduling of NVIDIA cards?

     

    My 3900X used to run 70 - 90% CPU usage with my old R9 Fury or 7970x2 Crossfire setup and max GPU usage. That was without using any core affinity options.

     

    Now that I have a 3090 I max out at 25% CPU usage and 30% GPU usage. the R9 actually got better FPS at 1080P when using the same settings, compared to the 3090, until getting to settings that exceeded the VRAM of the older cards.

     

    1440P is the same FPS with the same settings between the R9 and 3090, until you get into the high and ultra settings.

Ryzen 3000 CPUs won't ever be fast enough in normal situations to max out a 3090 in 7DTD. If you follow my previously discussed "4 core affinity" advice, you can get a much more desirable 60-80% GPU usage at 1440p.

     

Also, 7DTD doesn't scale past 4 CPU cores, so a 12 core 24 thread CPU will never see 70-90% usage in game, regardless of which GPU is installed. You must have had something running in the background, like a Steam game update or a Windows update.
