    FAForever Forums

    Softles (@Softles)
    60 Reputation • 19 Posts • 12 Profile views • 0 Followers • 0 Following
    Best posts made by Softles

    Weekly AI Tourney Series

    As of last week, I am running a weekly AI-only tournament to put new and old AIs through their paces and see who comes out on top!

    Each week I'm planning to change up the format; I'll be doing stuff like 1v1s, team games, FFAs, modded games, and no-rush timers - let me know what you'd all like to see the AIs play 🙂

    I'll be posting the results of each week in this thread, so stay tuned for updates.

    PS: If anybody wants to enter an AI just let me know, all AIs are welcome!

    posted in AI development •
    RE: About the veterancy system

    I did a quick couple of tests using my custom profiler (developed originally for testing my AI), and found that under normal conditions the percentage of time spent in the two veterancy functions in question did not exceed ~0.2% of total sim compute time. This test was done in a 4v4 between my AIs, which are currently very land-focused.

    In ASF vs ASF tests I was able to get it up to at most ~2% of total sim compute time, but during the periods of negative sim speed it was at most ~0.7%.


    Overall my conclusion is that the rest of the sim scales worse than these veterancy functions, and they have a negligible effect most of the time. By all means feel free to change the vet system, but it probably doesn't need to be changed for performance reasons.


    Cautionary note: this method of profiling may not give completely accurate results for many calls of a low-cost function, due to the error associated with getting the system time and the precision of the system time provided. If anyone wants to jump in and check how these errors might have affected the results, go right ahead 🙂
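For a rough sense of the scale of those errors, here is a quick sketch (in Python, using `time.perf_counter` as a stand-in for the profiler's `Now()` call - the in-game clock will behave differently, so treat the numbers as illustrative only):

```python
import time

# Two error sources when timing a cheap function: the cost of reading the
# clock itself, and the clock's resolution. This estimates both for
# time.perf_counter (a stand-in for the profiler's Now() call).

def timer_overhead(samples=100_000):
    """Average cost, in seconds, of one pair of clock reads."""
    start = time.perf_counter()
    for _ in range(samples):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
    return (time.perf_counter() - start) / samples

def timer_resolution(samples=100_000):
    """Smallest non-zero gap observed between consecutive clock reads."""
    best = float("inf")
    for _ in range(samples):
        t0 = time.perf_counter()
        t1 = time.perf_counter()
        if t1 > t0 and (t1 - t0) < best:
            best = t1 - t0
    return best

# If a hooked function is called N times, the accumulated measurement error
# is roughly N * overhead, which matters once it nears the function's real cost.
print(f"overhead ~{timer_overhead() * 1e9:.0f} ns per reading pair, "
      f"resolution ~{timer_resolution() * 1e9:.0f} ns")
```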

    For completeness, a link to my profiling code and my hook of the Unit class for testing:

    -- Keep a reference to the original Unit class so the hooks can call through to it
    YeOldeUnitThingy = Unit

    local PROFILER = import('/mods/DilliDalli/lua/AI/DilliDalli/Profiler.lua').GetProfiler()

    Unit = Class(YeOldeUnitThingy) {
        -- Time each call to the two veterancy-related functions and accumulate
        -- the results in the profiler under their own keys
        VeterancyDispersal = function(self, suicide)
            local start = PROFILER:Now()
            YeOldeUnitThingy.VeterancyDispersal(self, suicide)
            PROFILER:Add("VeterancyDispersal", PROFILER:Now() - start)
        end,

        DoTakeDamage = function(self, instigator, amount, vector, damageType)
            local start = PROFILER:Now()
            YeOldeUnitThingy.DoTakeDamage(self, instigator, amount, vector, damageType)
            PROFILER:Add("DoTakeDamage", PROFILER:Now() - start)
        end,
    }
    

    Some output from tests:

    4v4 AI match, ~15 mins ~1k units on the map:
    info: Time per game second:0.32s, Top Costs: DoTakeDamage-0.16%, VeterancyDispersal-0.01%, 
    info: Time per game second:0.34s, Top Costs: DoTakeDamage-0.14%, VeterancyDispersal-0.02%,
    info: Time per game second:0.31s, Top Costs: DoTakeDamage-0.15%, VeterancyDispersal-0.01%, 
    
    100 vs 100 ASF:
    INFO: Time per game second:0.17s, Top Costs: DoTakeDamage-0.65%, VeterancyDispersal-0.09%, 
    INFO: Time per game second:0.17s, Top Costs: DoTakeDamage-0.97%, VeterancyDispersal-0.11%, 
    INFO: Time per game second:0.17s, Top Costs: DoTakeDamage-0.75%, VeterancyDispersal-0.11%, 
    INFO: Time per game second:0.17s, Top Costs: DoTakeDamage-0.58%, VeterancyDispersal-0.11%,
    
    1000 vs 1000 ASF:
    INFO: Time per game second:17.49s, Top Costs: DoTakeDamage-0.14%, VeterancyDispersal-0.00%, 
    INFO: Time per game second:4.23s, Top Costs: DoTakeDamage-0.24%, VeterancyDispersal-0.02%, 
    INFO: Time per game second:2.60s, Top Costs: DoTakeDamage-0.47%, VeterancyDispersal-0.06%, 
    INFO: Time per game second:1.98s, Top Costs: DoTakeDamage-0.71%, VeterancyDispersal-0.09%, 
    INFO: Time per game second:1.26s, Top Costs: DoTakeDamage-1.17%, VeterancyDispersal-0.14%, 
    INFO: Time per game second:0.92s, Top Costs: DoTakeDamage-1.07%, VeterancyDispersal-0.16%, 
    INFO: Time per game second:0.59s, Top Costs: DoTakeDamage-2.25%, VeterancyDispersal-0.15%, 
    INFO: Time per game second:0.44s, Top Costs: DoTakeDamage-1.25%, VeterancyDispersal-0.18%,
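For reference, percentages like these can be derived from accumulated timings as sketched below (this is an assumption about what the report shows - accumulated function time over total sim time for the window - not the profiler's actual internals):

```python
# Sketch of how a "Top Costs" percentage can be computed: the profiler's
# accumulated time per hooked function, divided by the total sim time for
# the reporting window. The numbers below are made up to match the log
# format; this is not the profiler's actual code.

def top_costs(accumulated, total_sim_time):
    """accumulated: dict of function name -> summed seconds spent in it."""
    return {name: round(100.0 * t / total_sim_time, 2)
            for name, t in accumulated.items()}

window = top_costs({"DoTakeDamage": 0.0512, "VeterancyDispersal": 0.0032}, 32.0)
print(window)  # -> {'DoTakeDamage': 0.16, 'VeterancyDispersal': 0.01}
```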
    
    posted in Balance Discussion •
    DilliDalli - 1v1 specialist AI

    TLDR: Please play vs my new DilliDalli AI and let me know how it goes!

    Hi everyone - just wanted to make a quick post to officially announce my DilliDalli AI, which I am hoping can start to give humans a challenge in 1v1 situations (without the use of cheat multipliers / vision).

    The AI is in the mod vault so feel free to give it a go if that sounds interesting! I'd love to use this thread to announce improvements I make, and also to get feedback from you guys. Any replays IDs you have playing my AI are super valuable - I can automate AI vs AI games but there's nothing like testing against the real deal.

    Right now I'd give it ~600 rating for 5x5 and 10x10 maps, but I'm really not too sure (which is where you can all help out).

    Under the hood it's a complete AI framework rewrite (~0 code overlap with anything else AI-related after initialisation), which (I believe) gives me the flexibility to tune up some more competitive behaviours. Watch this space, because there are all sorts of improvements in the pipeline (like actually teaching it how to move air units). Feel free to ask questions about all that too if you're interested - I'm probably going to make some separate posts at some point talking about it in depth and trying to get more people involved in development.

    posted in AI development •
    RE: Weekly AI Tourney Series

    26th November 2021 - Edition 1 - 1v1 fight club
    In the first edition of the tourney, I ran a 1v1 competition on the current 1v1 ladder map pool for 200-700 rated players (not including the mapgen, though). In each round, every AI played every other AI on every map in the pool. At the end of each round, the lowest-scoring AI was eliminated, leading to 5 rounds in total for the 6 entered AIs:

    • Adaptive AI
    • Sorian AI Adaptive
    • Dilli AI - last update: 2018/06/10
    • DilliDalli AI - last update: 2021/07/27
    • Swarm AI - last update: 2021/11/26
    • M27AI - last update: 2021/11/19

    The scores for each AI in each round were 0 points per loss, 1 point per draw, and 3 points per win.

    AI Round 1 Round 2 Round 3 Round 4
    DilliDalli 112 80 50 31
    M27 79 67 43 25
    Dilli 59 38 30 15
    Swarm 50 32 17 X
    Sorian Adaptive 35 15 X X
    Adaptive 13 X X X

    In the final round, DilliDalli and M27 won 4/8 games each, leading to a tied competition!

    • DilliDalli won on Eye of the Storm, Floralis, Forbidden Pass, and Open Palms.
    • M27 won on Auburn Canyon, The Ganges Chasma, Pelagial, and Theta Passage.

    To give an insight into where each AI was picking up points, I also worked out the win rate for each AI on each map (the number of games varies per AI, since they were eliminated at different times):

    AI Auburn Canyon Eye of the Storm Floralis Forbidden Pass The Ganges Chasma Open Palms Pelagial Theta Passage
    Adaptive 20% 0% 20% 0% 0% 0% 0% 0%
    Dilli 43% 43% 50% 64% 43% 36% 29% 7%
    DilliDalli 67% 73% 100% 80% 67% 100% 67% 67%
    M27 100% 7% 60% 40% 93% 60% 20% 100%
    Sorian Adaptive 0% 22% 11% 0% 22% 11% 67% 33%
    Swarm 25% 17% 17% 33% 25% 33% 42% 50%
    posted in AI development •
    RE: Weekly AI Tourney Series

    3rd December 2021 - Edition 2 - 2v2 double trouble
    This week the AIs played 2v2s on the <1000 rating TMM pool, with full share active. A similar format was followed to last week, except that 2 AIs were eliminated each round (in order to reduce the total number of games that had to be run). 6 AIs were entered this week:

    • Adaptive AI
    • Sorian AI Adaptive
    • Dilli AI - last update: 2018/06/10
    • DilliDalli AI - last update: 2021/07/27
    • M27AI - last update: 2021/11/19
    • RNG AI -

    As before, any game that reached 45 mins was ended and called a draw.

    The scores in each round were calculated as 3 points per win and 1 point per draw, with the round results as follows:

    AI Round 1 Round 2 Round 3
    RNG 103 53 13
    M27 98 53 13
    DilliDalli 110 46 X
    Dilli 73 20 X
    Adaptive 24 X X
    Sorian Adaptive 18 X X

    Giving us another tied tournament, with #1 spot shared between M27AI and RNG AI!

    These results masked some interesting individual strengths and weaknesses for each of the AIs, including our two winners:

    M27 AI:

    • Great ACU duelling, superb t3 land rushes, and effectively used gifted bases if teammates died.
    • Struggled on water maps (as well as Serenity Desert, due to a bug), and risky plays with the ACU left it vulnerable.

    RNG AI:

    • Great all round, particularly outclassing other AIs on the water maps. Solid base construction made it hard to beat.
    • Struggled for map control on land focused maps, and doesn't yet make use of gifted allied bases on full share.

    Now for some extra stats for your viewing pleasure.
    AI head-to-head results across all rounds (cells are from the perspective of the row AI):

    AI\Opponent RNG M27AI DilliDalli Dilli Adaptive Sorian Adaptive
    RNG ------ 10W, 9D, 11L 5W, 3D, 12L 17W, 1D, 2L 10W, 0D, 0L 10W, 0D, 0L
    M27AI 11W, 9D, 10L ------ 12W, 3D, 5L 14W, 2D, 4L 4W, 5D, 1L 7W, 1D, 2L
    DilliDalli 12W, 3D, 5L 5W, 3D, 12L ------ 14W, 2D, 4L 8W, 1D, 1L 10W, 0D, 0L
    Dilli 2W, 1D, 17L 4W, 2D, 14L 4W, 2D, 14L ------ 9W, 1D, 0L 10W, 0D, 0L
    Adaptive 0W, 0D, 10L 1W, 5D, 4L 1W, 1D, 8L 0W, 1D, 9L ------ 1W, 8D, 1L
    Sorian Adaptive 0W, 0D, 10L 2W, 1D, 7L 0W, 0D, 10L 0W, 0D, 10L 1W, 8D, 1L ------

    Special credit to Dilli and DilliDalli for perfect records against Sorian Adaptive, as well as RNG for perfect records against both Adaptive and Sorian Adaptive.

    Map stats for each AI:

    Map RNG M27AI DilliDalli Dilli Adaptive Sorian Adaptive
    Adaptive Meadow 6W, 2D, 1L 4W, 2D, 3L 3W, 1D, 4L 5W, 0D, 3L 0W, 2D, 3L 0W, 1D, 4L
    Angel Lagoon 7W, 2D, 0L 2W, 4D, 3L 3W, 2D, 3L 1W, 3D, 4L 0W, 3D, 2L 2W, 0D, 3L
    Charity 5W, 1D, 3L 6W, 1D, 2L 6W, 1D, 1L 2W, 0D, 6L 0W, 2D, 3L 0W, 1D, 4L
    Desert Planet II v2 5W, 1D, 3L 7W, 2D, 0L 5W, 1D, 2L 2W, 0D, 6L 0W, 1D, 4L 0W, 1D, 4L
    Nomadiah 4W, 1D, 4L 8W, 1D, 0L 5W, 1D, 2L 2W, 0D, 6L 0W, 2D, 3L 0W, 1D, 4L
    Pelagial v2 7W, 2D, 0L 2W, 4D, 3L 1W, 2D, 5L 3W, 3D, 2L 1W, 2D, 2L 1W, 1D, 3L
    Strife of Titan 3W, 2D, 4L 5W, 2D, 2L 7W, 0D, 1L 4W, 0D, 4L 0W, 1D, 4L 0W, 1D, 4L
    Serenity Desert 7W, 0D, 2L 0W, 1D, 8L 8W, 0D, 0L 4W, 0D, 4L 2W, 0D, 3L 0W, 1D, 4L
    Syrtis Major 4W, 0D, 5L 8W, 0D, 1L 5W, 0D, 3L 4W, 0D, 4L 0W, 1D, 4L 0W, 1D, 4L
    Turtle Rocks 4W, 2D, 3L 6W, 3D, 0L 6W, 1D, 1L 2W, 0D, 6L 0W, 1D, 4L 0W, 1D, 4L

    Special credit to DilliDalli for the 100% record on Serenity Desert.

    As an extra special treat, I also recorded performance-over-time stats for each of the games, averaged over all games to get an idea of each AI's in-game performance (i.e. how quickly it runs in game). This stat isn't perfect, since it doesn't control for how performant opponents were in their games, but it gives a rough idea.
    [attached chart: tmp.png]
    (perf here is measured as #real seconds per 10 game seconds while speed is set to +10, with full +10 roughly meaning a perf of 1.5 and +0 roughly meaning a perf of 13)

    Hope people find this interesting - tune in next week for 4v4s, 5v5s and 6v6s on classic team-game maps. As ever feel free to drop in ideas for future weeks or anything else 🙂

    posted in AI development •
    RE: Supreme Computer Cup

    AI vs AI game results are as follows:

    Place AI Points
    1 RNG Standard 47
    2 DilliDalli 43.5
    3 Swarm Terror 32.5
    4 SACU AI 22.5
    5 Sorian Adaptive 19.5
    6 Uveso Adaptive 16
    7 Sorian Edit Adaptive 8

    Head to head results:
    (AI by row, opponent by column, points are "for the AI, against the opponent")

    AI\Opponent DilliDalli RNG Standard SACU AI Sorian Adaptive Sorian Edit Adaptive Swarm Terror Uveso Adaptive
    DilliDalli - 4.5 6.5 7.5 9 7 9
    RNG Standard 4.5 - 8 8 9 8.5 9
    SACU AI 2.5 1 - 5.5 7 3 3.5
    Sorian Adaptive 1.5 1 3.5 - 7 1 5
    Sorian Edit Adaptive 0 0 2 1 - 1 3.5
    Swarm Terror 2 0.5 6 8 8 - 8
    Uveso Adaptive 0 0 5.5 4 5.5 1 -

    Raw (JSON formatted) results as output from my script: results.txt

    posted in Tournaments •
    RE: Weekly AI Tourney Series

    11th December 2021 - Edition 3 - 4v4+: you can't spell teaim without AI.

    This week the AIs were playing on a selection of team game maps to find out who you should bring to back you up in a com fight:

    • Round 1: Tabula Rasa v3 (4v4) and The Pyramid 5v5 (5v5)
    • Round 2: Hilly Plateau (4v4) and Diversity (4v4)
    • Round 3: Canis 4v4 spezial edition (4v4) and Adaptive Wonder Open (8v8)

    Rules this week were share until death and a 2-hour game limit (which none of the games reached). I'll just include the win % in the tables below, since there were no draws this week.

    Each AI played every other AI on every map once in round 1, twice in round 2, and 4 times in round 3. Without further ado, here were the results:

    AI Round 1 Round 2 Round 3
    RNG Standard 60% 58% 87%
    DilliDalli 90% 75% 13%
    M27AI 70% 33% X
    Dilli 60% 33% X
    Sorian Edit Adaptive 10% X X
    Swarm 10% X X

    Congrats to RNG Standard for the first outright win of the series!

    The head to head and per map results were as follows:

    AI\Opponent RNG Standard DilliDalli M27AI Dilli Sorian Edit Adaptive Swarm
    RNG Standard - 64% 33% 83% 100% 100%
    DilliDalli 36% - 100% 67% 100% 100%
    M27AI 67% 0% - 50% 100% 100%
    Dilli 17% 33% 50% - 100% 100%
    Sorian Edit Adaptive 0% 0% 0% 0% - 50%
    Swarm 0% 0% 0% 0% 50% -
    Map RNG Standard DilliDalli M27AI Dilli Sorian Edit Adaptive Swarm
    Tabula Rasa 60% 80% 60% 80% 0% 20%
    The Pyramid 60% 100% 80% 40% 20% 0%
    Hilly Plateau 33% 100% 67% 0% X X
    Diversity 83% 50% 0% 67% X X
    Canis 100% 0% X X X X
    Wonder 75% 25% X X X X

    This week I also pulled performance stats for each AI against itself in the two round 1 maps, which produced the results below:
    [attached chart: tmp.png]
    Again for reference, a performance of X => X/10 real seconds per game second (trying to run at +10 speed). Results are plotted against the number of units on the map.

    Congrats again to RNG Standard for winning, and remember tune in next week for some Free For All chaos!

    posted in AI development •
    RE: Weekly AI Tourney Series

    22nd January 2022 - Edition 4 - 1v1s: Cry me a river

    After a festive break we are back with a bumper edition of the AI tourney - 11 AIs entered, fighting it out across 5 rounds (round robin) of 1v1s on the following maps:

    • Twin Rivers (round 1)
    • Twin Rivers (round 2)
    • Twin Rivers (round 3)
    • Twin Rivers (round 4)
    • Twin Rivers (round 5)

    The AIs entered for this week are:
    Adaptive AI, Dalli AI, Dilli AI, DilliDalli AI, M27 AI, RNG Standard AI, SCTA Arm, SCTA Core, Sorian Adaptive AI, Sorian Edit Adaptive AI, Swarm Terror AI.

    Format
    The twist is that each round AIs are granted bonus cheat multipliers based on the previous round's results. In brief, an AI gets a +0.1 boost to resources and build rate for every position down the leaderboard it was in the last round - and accumulates these bonuses as the tournament goes on.

    To stop these boosts getting out of hand in the matches, the boosts applied in game are scaled down based on the lower of the boosts between two AIs. For example if AI Alpha with boost 2.0 is playing AI Bravo with boost 2.4, then the applied settings would be a 1.0x multiplier for Alpha (2.0/2.0) and a 1.2x multiplier for Bravo (2.4/2.0). Applied boosts are rounded to the nearest 0.1 due to UI limitations.

    The aim is that as the rounds progress, we get a better and better idea of what boosts each AI needs to be an equal match for every other AI (kinda like the AI's handicap). The AI with the lowest boost at the end of the tourney wins!
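The pairwise scaling above can be sketched as follows (an illustrative reconstruction in Python, not the script actually used to set up the games):

```python
# Divide both AIs' accumulated boosts by the lower of the two, then round to
# the nearest 0.1 to fit the lobby UI's settings.

def applied_boosts(boost_a, boost_b):
    low = min(boost_a, boost_b)
    return (round(boost_a / low, 1), round(boost_b / low, 1))

# The worked example from above: Alpha at 2.0 vs Bravo at 2.4.
print(applied_boosts(2.0, 2.4))  # -> (1.0, 1.2)
```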

    Matches:
    M: Accumulated Modifier going into the round
    W: Wins that round
    P: End of round placement (* joint)

    Round 1 Round 2 Round 3 Round 4 Round 5
    AI M W P M W P M W P M W P M W P
    Adaptive 1.0 3 8 1.7 7 3* 1.9 0 11 2.9 6 2* 3.0 2 9*
    Dalli 1.0 9 1* 1.0 2 10 1.9 9 2 2.0 5 7* 2.6 9 2
    Dilli 1.0 8 3 1.2 3 8* 1.9 8 3 2.1 6 2* 2.2 2 9*
    DilliDalli 1.0 6 4* 2.3 7 3* 1.5 4 6* 2.3 2 11 2.9 10 1
    M27 1.0 5 6* 1.5 9 1 1.5 3 9 2.3 5 7* 2.9 6 4*
    RNG Standard 1.0 9 1* 1.0 0 11 2.0 10 1 2.0 6 2* 2.1 4 6*
    SCTA Arm 1.0 2 9 1.8 5 6* 2.3 4 6* 2.8 3 9* 3.6 7 3
    SCTA Core 1.0 1 10* 2.9 8 2 2.0 2 10 2.9 6 2* 3.0 4 6*
    Sorian Adaptive 1.0 1 10* 1.9 6 5 2.3 6 4 2.6 3 9* 3.4 6 4*
    Sorian Edit 1.0 6 4* 1.3 3 8* 2.0 5 5 2.4 6 2* 2.5 3 8
    Swarm Terror 1.0 5 6* 1.5 5 6* 2.0 4 6* 2.5 7 1 2.5 2 9*

    Results:
    The final set of modifiers (raw, then normalised against the lowest and rounded to the nearest 0.1):
    2.6, 1.0: RNG Standard
    2.7, 1.0: Dalli
    3.0, 1.2: Dilli, DilliDalli
    3.2, 1.2: M27, Sorian Edit
    3.3, 1.3: Swarm Terror
    3.5, 1.3: SCTA Core
    3.7, 1.4: Sorian Adaptive
    3.8, 1.5: Adaptive, SCTA Arm

    Congrats to RNG Standard AI for winning, commiserations to everyone else!

    For fun, here's a view of the normalised modifiers (normalised compared to leading AI that round) each AI had as the tourney progressed:
    [attached chart: tmp.png]

    I think the scores would have continued to settle down a little if I'd kept running more rounds, but after manually running 275 games in a row I didn't feel like any more...

    --

    The particularly sharp amongst you may have noticed that the weekly tourney hasn't exactly been weekly lately - and going forwards this will become a less regular thing (maybe monthly?) so that I can focus on developing AIs to enter.

    See you all in the next one!

    posted in AI development •
    RE: Do not add new colors - discussion

    Could we simply have a selection of different colour palettes that the user can toggle between? There are a few different kinds of colour blindness, so having a colour palette designed to provide maximum contrast for each kind would be ideal (it's not a one-size-fits-all thing).

    i.e. there are 20 colours in the game now; have an option that swaps these colours like for like with a set of 20 high-contrast colours, depending on your kind of colour blindness.
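A minimal sketch of what that could look like (every name and hex value below is invented for illustration - these are not the game's actual colours):

```python
# Like-for-like palette swap: one fixed-size palette per colour-vision mode,
# so slot N in one palette always maps to slot N in another. All names and
# hex values here are made up for illustration.

PALETTES = {
    "default":      ["#e80a0a", "#2ca02c", "#1f77b4"],  # stand-in stock colours
    "deuteranopia": ["#d55e00", "#f0e442", "#0072b2"],  # stand-in high-contrast set
}

def team_colour(slot, mode="default"):
    """Colour for a player slot under the user's selected palette."""
    return PALETTES[mode][slot]

print(team_colour(0, "deuteranopia"))  # -> #d55e00
```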

    posted in General Discussion •
    RE: How open Supreme Commander FA Lobby (Skirmish) via command line?

    I think something like this is probably what you're looking for:

    C:\ProgramData\FAForever\bin\ForgedAlliance.exe /init init_faf.lua /nobugreport
    

    This will launch the game with the FAF modifications enabled, at which point you can open a skirmish lobby.

    posted in Contribution •

    Latest posts made by Softles

    RE: Weekly AI Tourney Series

    Also let me know how you want results tables formatted in the future - should I stick with {Wins, Draws, Losses}, swap to {points} or go back to {win %} in the head to head and map results tables? (or something else entirely??)

    posted in AI development •
    RE: Weekly AI Tourney Series

    @Femboy sure - but what would I do with a banner?

    @Dragun101 no plans to include SCTA every week, but it would definitely make a good feature for one of the weeks 🙂

    posted in AI development •
    RE: Text data output from mods.

    I have a tool you can install and run relatively easily: https://github.com/HardlySoftly/FAF-AI-Autorun

    It is designed for auto-running AI vs AI games in large batches, so it isn't exactly your use case, but it does include a variety of things that overlap with what you're interested in:

    • Configurable automated launching of skirmish games from the command line.
    • Support for batches of games run in parallel (it can restrict each FA instance to a different core for best performance).
    • Outputting data to logs from each game, then collating those together for overall results.

    Hopefully this is helpful 🙂

    posted in Modding & Tools •