You’d think the failure of 20th-century command economies would have demonstrated the danger of “my experiment contradicts real-world results, so the real-world results must be at fault,” but apparently not.
Rule of thumb: if your experiment doesn’t match popular wisdom, especially in something as complex as an RTS, where there are tons of variables to account for, it’s usually because of externalities you disregarded for the sake of making your experiment easier.
Case in point:
Suppose I sandbox a mass-equivalent fight of Ints vs. Swift Winds and the Ints win. Conclusion: Swift Winds are useless and need a buff, since they can’t beat the unit they’re intended to replace in a fair fight.
Except they aren’t, because things like speed, engagement control, DPS loss as units die (the snowball effect), scaling factory count to match the mass investment, etc. all factor into evaluating a unit. Anyone would call you nuts for suggesting a buff to Swift Winds.
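To make the “externalities” point concrete, here’s a toy attrition sim (all stats are made up for illustration, nothing to do with the actual units): damage output scales with how many units are still alive, and the faster, longer-ranged side gets a few seconds of uncontested damage before the brawler closes range. Stationary, the heavies win the mass-equivalent fight; add even a little engagement control and the result flips.

```python
import math

def fight(n_a, hp_a, dps_a, n_b, hp_b, dps_b, free_s=0.0, dt=0.05):
    """Toy attrition model. Side B gets free_s seconds of uncontested
    damage before side A closes range (a crude stand-in for kiting /
    engagement control). Each side's output scales with surviving unit
    count, so whoever pulls ahead snowballs.
    Returns (winner, surviving unit count)."""
    pool_a, pool_b = n_a * hp_a, n_b * hp_b
    pool_a -= n_b * dps_b * free_s          # free damage while A approaches
    t = 0.0
    while pool_a > 0 and pool_b > 0 and t < 600:
        alive_a = math.ceil(pool_a / hp_a)  # snowball: dead units stop shooting
        alive_b = math.ceil(pool_b / hp_b)
        pool_a, pool_b = (pool_a - alive_b * dps_b * dt,
                          pool_b - alive_a * dps_a * dt)
        t += dt
    if pool_a > 0:
        return "A", math.ceil(pool_a / hp_a)
    return "B", math.ceil(max(pool_b, 0) / hp_b)

# 3 heavies (400 hp, 40 dps) vs a mass-equivalent 6 lights (150 hp, 20 dps).
print(fight(3, 400, 40, 6, 150, 20, free_s=0.0))  # stationary slugfest: heavies win
print(fight(3, 400, 40, 6, 150, 20, free_s=4.0))  # 4 s of free kiting damage: lights win
```

Same units, same mass, opposite verdicts. The sandbox only measured the first scenario; real games look like the second.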
You’re free to climb to 1800 crushing people with battlecruisers against battleships, but there’s a reason this isn’t meta: battleships scale significantly better, and the BC + shield combo hits diminishing returns very quickly.
You can say “experience is irrelevant,” but it isn’t. Any reasonable study accounts for the discrepancies between experimental conditions and natural conditions, and for how those discrepancies can make different things efficient. There’s no replay of a high-level player fielding BCs against BS because it sucks, straight up. Even your test is skewed: it gives the BC a shield boat that never gets sniped, and it ignores that the BC has to close into range past the BS’s escorting frigates, which can keep pushing the BC away while the BS deals free damage.
I don’t care about showing a replay because it’s just what you see in any T3 navy game: Sentons, Metir, Lena River, whatever. It’s a waste of time to go prove it. Go climb 1000 rating while disregarding BS and show everyone the new meta you found.