Lame products & lazy engineering

Admin

Administrator
Moderator
Messages
2,915
#1
One big issue I have noticed with technology in general is that companies are not really pushing for excellence; instead they are just trying to be a bit better than the competition for most people, and that's about it. There is no real push for greatness anymore.

Often new products are stripped of features. One particularly egregious example of this is Apple and Samsung removing headphone jacks from their phones.

Sometimes features or performance are intentionally withheld from a cheaper product to push people towards buying a more expensive variant instead.
 

Admin

Administrator
Moderator
Messages
2,915
#2
Nvidia GPUs are made not to be used fully
If you actually fully load an Nvidia GPU (for example via FurMark) it will throttle really badly. The big issue here is that the voltage of the card is set not to drop below 0.712 V (for the TUF 3090 OC), which is a real problem when the power of the card needs to drop further to remain below the power limit. You end up with a lowered frequency but the same voltage, which is very far from ideal.
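To make the scaling intuition concrete, here is a rough Python sketch using the textbook dynamic-power approximation P ≈ k·V²·f. The constant is calibrated against the ~550 W full-load figure mentioned further down, leakage is ignored, and all numbers are illustrative assumptions rather than measurements.

# Rough sketch of why dropping frequency at a fixed voltage is a poor way
# to stay under a power limit. Uses the textbook approximation P ~ k*V^2*f.
# The constant k and all numbers are illustrative assumptions, not measurements.

def dynamic_power(voltage_v, freq_mhz, k):
    """Very rough dynamic power estimate in watts (leakage ignored)."""
    return k * voltage_v ** 2 * freq_mhz

# Calibrate k so that 0.712 V @ 1600 MHz corresponds to roughly 550 W full load.
k = 550.0 / (0.712 ** 2 * 1600)

# What the card actually does: hold 0.712 V and drop the clock until it fits
# under the 375 W limit. Power only falls linearly with frequency, so the
# clock has to fall all the way to ~1100 MHz.
for freq in (1600, 1400, 1200, 1100):
    print(f"0.712 V @ {freq} MHz -> ~{dynamic_power(0.712, freq, k):.0f} W")

# If the voltage were allowed to drop along with the clock, power would fall
# with V^2 as well, so far less frequency would have to be sacrificed
# (0.625 V is a hypothetical value, not something the card will actually use).
print(f"0.625 V @ 1400 MHz -> ~{dynamic_power(0.625, 1400, k):.0f} W")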


I was able to replicate that issue with the TUF 3090 OC by using the "extreme burn-in" option. With a 375 W power limit it ended up very far from the 1600 MHz I had overclocked it to at 0.712 V (dropping down to 1100 MHz, which is well below the stock base clock).

(Attached image: Extended.PNG – the extended voltage/frequency curve in MSI Afterburner)

Some people have claimed that 0.712 V would be some minimum for the architecture, but that is extremely unlikely since at that voltage I was able to increase the frequency by 300 MHz (a 20% overclock), showing that the voltage applied at stock is a lot higher than needed. Most likely this issue could have been resolved just by allowing a lower minimum voltage (likely specified in the VBIOS). Unfortunately, even when you extend the voltage curve in MSI Afterburner (picture above) it will not actually be followed by the card.

While this is a real issue it does not mean that Nvidia GPUs are lame; the problem seems to have been that they had to go with the Samsung 8 nm process, which resulted in power draw a lot higher than ideal even at low voltages (such as 0.718 V).

You can still prevent the down-clocking simply by increasing the power budget a lot (if the VRM, PSU and cooler can handle it), but that might require modding the BIOS, which will void the warranty (depending on what model you got). You would need over 550 W of continuous power to support a fully loaded 3090 Ti at 0.712 V & 1600 MHz.

This also means that if you want to reduce the power consumption of your card, the better approach is simply to reduce the max voltage and accept that the power consumption will no longer be stable. That might also help to reduce transient loads, since the maximum height of the peaks depends on the max voltage, though it might still be worse if you are PSU- or cooling-constrained. It also means that the ability to increase fps/W is rather limited as long as the voltage is stuck at that floor.
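Under the same rough P ≈ k·V²·f approximation, here is a minimal sketch of why fps/W barely moves when only the power limit is lowered while the voltage stays pinned, but does improve when the voltage cap itself is reduced (performance is assumed to scale with clock; all numbers are illustrative):

# Sketch: efficiency under a power cap at a pinned voltage vs. a lower voltage cap.
# Assumes performance ~ frequency and P ~ k*V^2*f (illustrative model only).

k = 550.0 / (0.712 ** 2 * 1600)  # same calibration as the sketch above

def power_w(v, f_mhz):
    return k * v ** 2 * f_mhz

def perf_per_watt(v, f_mhz):
    return f_mhz / power_w(v, f_mhz)  # reduces to 1/(k*v^2): independent of clock

# Power-capped with the voltage stuck at 0.712 V: clocks drop, efficiency doesn't move.
print(perf_per_watt(0.712, 1600), perf_per_watt(0.712, 1100))  # identical values

# Capping the voltage instead (hypothetical 0.650 V) improves fps/W, at the cost
# of power draw that follows the workload rather than sitting at a flat limit.
print(perf_per_watt(0.650, 1600))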
 

Admin

Administrator
Moderator
Messages
2,915
#3
How Intel ruined emulation performance
Since Alder Lake, Intel has switched to a hybrid architecture where they mix strong P-cores with weak E-cores. The concept of mixing strong and weak cores is not too bad by itself, but the issue is that these Intel CPUs simply do not offer enough performance cores, due to Intel abandoning high-end desktop. Another big issue is that the E-cores don't have AVX-512, and instead of making sure only the P-cores would run AVX-512 instructions, Intel just disabled it from the factory and later started physically fusing it off for no good reason. This is disastrous for emulation performance.

wccftech.com/amd-zen-4-avx-512-major-benefits-in-emulators-such-as-yuzu-citra-xenia-vita3k/
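For context on why fusing it off hurts: emulators pick their fastest code path at runtime based on the CPU features the hardware reports. A minimal, Linux-only sketch of that kind of dispatch (the flag names come from /proc/cpuinfo; real emulators query CPUID directly):

# Sketch: how software typically gates an AVX-512 fast path at runtime.
# Linux-only illustration via /proc/cpuinfo; real emulators use CPUID directly.

def cpu_flags():
    """Return the CPU feature flags reported by the kernel (Linux only)."""
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

def pick_code_path():
    flags = cpu_flags()
    # avx512f is the AVX-512 "foundation" flag; emulators key off it (and
    # related flags) before enabling their wider vector kernels.
    if "avx512f" in flags:
        return "AVX-512 path"
    if "avx2" in flags:
        return "AVX2 fallback"
    return "scalar fallback"

# On Alder Lake / Raptor Lake the avx512f flag is simply absent, so the faster
# path is never taken even though the P-cores physically had the units.
print(pick_code_path())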

It's unclear how much of this is lazy engineering and how much is market segmentation; either way they are doing a good job of losing to AMD. It's probably too late for Intel to properly re-enable AVX-512 on Raptor Lake by now (even though it's badly needed).
Experts in the area have pointed out that technically the chip could be designed to catch the error and hand off the thread to the right core, but Intel hasn’t done this here as it adds complexity. By disabling AVX-512 in Alder Lake, it means that both the P-cores and the E-cores have a unified common instruction set, and they can both run all software supported on either.
www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity/2
 

Admin

Administrator
Moderator
Messages
2,915
#4
AMD chiplets
For unclear reasons AMD likes to shove non-monolithic designs down people's throats. The only potential advantage of not having a monolithic design is slightly lower manufacturing cost.

While the defect rate per die will increase with die size, that does not mean you have to throw away a lot of dies if you go for a monolithic design; you can simply disable the defective parts of the die and sell it as a cheaper model (such as a 3080 Ti instead of a 3090 Ti). Nvidia in particular has been very effective at making good use of defective dies rather than just throwing them away.
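As a rough illustration of that yield argument, here is a sketch using the simple Poisson yield model Y = exp(-area × defect density); the die areas, the defect density and the "critical fraction" are illustrative assumptions, not foundry data.

import math

# Rough sketch of the yield argument using the simple Poisson model
# Y = exp(-area * defect_density). All numbers are illustrative assumptions.

defect_density = 0.1   # defects per cm^2 (assumed)

def perfect_die_yield(area_cm2):
    """Probability that a die of the given area has zero defects."""
    return math.exp(-area_cm2 * defect_density)

small_die = 2.0   # cm^2, roughly chiplet-sized (assumed)
big_die = 6.0     # cm^2, a large monolithic GPU die (assumed)

print(f"small die, fully working: {perfect_die_yield(small_die):.0%}")
print(f"big die, fully working:   {perfect_die_yield(big_die):.0%}")

# The big die has fewer fully working chips per wafer, but a defect that only
# kills one of its many SMs can be fused off and the die sold as a cut-down
# part. The die is only lost when a defect lands in the area that cannot be
# disabled, so the salvageable yield depends only on that critical fraction:
critical_fraction = 0.15   # assumed share of the die that cannot be fused off
print(f"big die, salvageable:     "
      f"{perfect_die_yield(big_die * critical_fraction):.0%}")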

One big issue with chiplets instead of a monolithic die is that they add latency, but there can also be significant added power consumption due to the interconnects required. Almost all Zen 2 CPUs suffered a significant performance loss (around 6%) due to the L3 cache not being unified.

The notion that smaller chips would be cheaper relative to their size isn't reflected in GPU prices; we actually see the opposite, with GA102 chips giving the most active transistors per dollar. The worst-value GPU is actually the 3070 Ti, based on the smaller GA104.
 

Admin

Administrator
Moderator
Messages
2,915
#5
Xbox consoles
Microsoft has a proven track record of screwing up their hardware designs, with the only real exceptions being the original Xbox and the Xbox One X.

https://www.youtube.com/watch?v=YzIAYszH_X4

The original Xbox One came with a very weak GPU and was very clearly a much worse option than the PS4. Xbox fanboys used the games promised for the system to build hype, but all of those later ended up on PC (where you got multiple times better performance).


They sort of learned from their mistake with the Xbox Series X, but it's still not great hardware design since they ended up using a much larger chip while not actually getting any clear overall performance advantage over the PS5. They also don't offer proper Gen4 NVMe storage and force their customers to buy a proprietary expansion card for more than double the price; for that premium you of course also get more than double the time it takes for the card to read/write data.

Their controllers lack important features such as gyro aiming and, to a lesser extent, haptic feedback.
 