TMM Rating Allowance Needs to Use Ladder 1v1 Matching (or close to it)

Was this difference also not somewhat deliberate for lower rated players to learn faster?

Last weekend we played a game as 2x595 vs. 0 & 1007. At some point we sniped the 1007, but ultimately they won. The 0-rated player made a point of thanking the 1007-rated player for his guidance. (Replay https://replay.faforever.com/14167584 for those interested.)
Watching the replay from their point of view, I do see the frustration @George_W_Crush mentioned.

Still, obviously go for the best matches if there are 20 people in queue...

It has been my theory for a while that the game quality calculation is inaccurate for teams where there is a large rating gap between the highest and lowest rated players. I.e. as the OP said, a "balanced" game with an 1800 and a 1000 vs. two 1400s will see the latter team win the clear majority of the time.

Sadly, I don't have the technical know-how to verify this theory statistically.

I am currently writing a new matching algorithm to make a 4v4 queue possible. From my understanding trueskill doesn't factor in the rating disparity between players when calculating the game quality, so in order to get only games with similarly rated players we need to introduce our own quality metric. I will now explain my first draft so you can discuss whether you think it is a good formula and make suggestions to improve it.
Currently I calculate quality = uniformity * fairness + newbie bonus + time bonus.
Uniformity goes from 0 to 1: it is 1 if all players in the game have the same rating and 0 if the standard deviation of the ratings of all players is greater than 300.
Fairness works the same but looks at the difference in total team skill, so it is 0 if the rating difference between the teams is higher than 400.
The newbie bonus exists to match new players faster and is a flat bonus if a new player is in one of the teams.
The time bonus increases every time you are not successfully matched.
The main thing I want to know is: should we use the standard deviation for uniformity, or just the highest rating difference between players? The former would allow higher skill gaps in larger teams and smaller ones in 2v2.
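To make the trade-off concrete, here is a plain-Python sketch (the names and example ratings are mine, not from the matchmaker) comparing the two candidate spread metrics:

import statistics as stats

ratings_2v2 = [1400, 1400, 1000, 1000]
ratings_4v4 = [1400, 1300, 1300, 1200, 1200, 1100, 1100, 1000]

for ratings in (ratings_2v2, ratings_4v4):
    spread = max(ratings) - min(ratings)   # highest minus lowest player
    deviation = stats.pstdev(ratings)      # standard deviation of all players
    print(f"max diff: {spread}, pstdev: {deviation:.0f}")

# Both lineups have the same 400 gap between best and worst player, but the 4v4
# spreads it over more players, so its standard deviation (~122) is much lower
# than the 2v2's (200). That is why the stdev metric allows bigger gaps in larger teams.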

Do you use the players' mu - 3 * sigma or just mu for the players' rating?
I think it would be best to use the standard deviation for uniformity while also setting a maximum difference between the highest and lowest rating for the uniformity to even be considered, or simply saying that the quality is too low if highest minus lowest rating is greater than X. That highest-minus-lowest ceiling should also increase with each unsuccessful match, up to some cap.
For example (considering 4vs4 here) you start with a ceiling of 400 and raise it by, let's say, 50 per unsuccessful match up to 600.
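A rough sketch of how that escalating ceiling could look (function name and parameters are made up, just illustrating the proposal):

def rating_range_ceiling(failed_matching_attempts, start=400, step=50, cap=600):
    # Highest-minus-lowest rating allowed in a game, loosening with each
    # unsuccessful queue pop: 400, 450, 500, ... capped at 600
    return min(start + step * failed_matching_attempts, cap)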

I use mu - 3 * sigma for all calculations.
I failed to mention that I discard all possible games with a quality below a certain threshold. So in a way I am already doing what you proposed.

What is an acceptable value for the difference in team strengths in your opinion?

I know that you discard games below a certain quality and that the quality of the same team matchup rises with each unsuccessful queue pop, but what I am also suggesting is adding an extra element that prevents games that look decent enough by standard deviation but are not so great in reality because of the highest vs. lowest rating. For example, players with ratings 2000, 2x 1900, 1800, 2x 1400, 2x 1300 have a standard deviation of 282, which ain't great, but they will still probably be matched while usually making for an unbalanced game.
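For reference, that number can be reproduced with two lines of Python:

import statistics as stats

ratings = [2000, 1900, 1900, 1800, 1400, 1400, 1300, 1300]
print(stats.pstdev(ratings))        # ~281.7, the ~282 quoted above
print(max(ratings) - min(ratings))  # 700 between the highest and lowest player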
TBH I thought it would be much worse, and looking at it now after some trials with the calculator it's not bad, but I think that adding a hard ceiling of, I would say, 500-600 will remove the troubling edge cases while not cutting down on the matches too much.
Concerning the value (total team rating, right?), I would say that the difference should be no bigger than 100 initially, rising by 50 with each unsuccessful queue pop up to, I would say, 300.

Another thing popped up: we have no restrictions on premade teams. I noticed that when a 1500 queues up with a 500 it would drag down the quality score. Now we could say that we don't want to "punish" people that are friends but have a wide skill gap. On the other hand their individual ratings are probably still accurate and the rest of the players would still suffer from having such a skill variance in their games.
(I assume accurate ratings because newbies would get a matching bonus that alleviates this problem)
What do you guys think about this?

Surely a simple and basic thing to implement (not sure from a technical point of view, but logically for sure) would be to just offer a little more information than "4 people in queue".

A lot of the issues people have are either not finding a game when they see 8 people in the queue, or finding a game and realising it is stupidly balanced and that the people they are with are basically playing a different game, and then the salt flows through their veins.

Why is it not possible to just offer more information on who the "4 people in queue" are regarding their rating? It is already fairly well known that there is a rating bracket system (<500, <1000, 1000-1500, etc.), and these rating brackets/groups have access to their own maps and so on.

So why can't there be a simple tally system under the "4 people in queue" that just says: oh, by the way, these 4 people are in the <500 bracket. Then someone who is 1600 knows they probably shouldn't queue unless they want to wait 25 minutes to maybe get a game.

I don't know a lot about how difficult this is, but as an idea it just seems far too simple and helpful to neglect. Just give people that tiny bit more information on what "8 people in queue" means, and I can't think of a simpler way than just a tally mark system for each bracket.
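As a very rough sketch of the idea (the bracket boundaries and all names here are made up for illustration, not the ones the matchmaker actually uses):

from collections import Counter

# Hypothetical rating brackets, just to illustrate the tally idea
BRACKETS = [(500, "<500"), (1000, "500-1000"), (1500, "1000-1500"), (float("inf"), "1500+")]

def bracket(rating):
    # Return the label of the first bracket whose upper bound exceeds the rating
    for upper, label in BRACKETS:
        if rating < upper:
            return label

def queue_summary(queued_ratings):
    # e.g. Counter({'<500': 4, '1000-1500': 1}) instead of just "5 people in queue"
    return Counter(bracket(r) for r in queued_ratings)

print(queue_summary([230, 310, 450, 480, 1220]))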


This would also be inaccurate for edge cases where you are at the cusp of entering/leaving a bracket, for new players that get artificial mu values set, and for premade teams, which could have something like a 0-rated guy playing with a 2000-rated guy.

What premade teams will have a 0-rated and a 2k-rated player? There aren't even any 2k-rated players now after the reset, and when there were, I think only 2 of the 4 or 5 people who got to 2k TMM were arguably active.

I don't see how people being close to a bracket boundary is an issue. You could just round their rating the way global does when showing someone's rating in the lobby, and use that.

I'm sure within the idea there is some middle ground that could be reached which would at least give more information as to whether you should even bother queuing. People have said there was something similar in python for ladder at some point?


I think to give this discussion more structure you first of all need to define which problem you want to solve and why your solution would solve it.
You went from "people don't understand why they don't get a game" to "people want to know whether they should even bother queueing".
Btw, do you not just queue when you want to play? Why is the distribution of players relevant for this?

Banning large differences can be a major problem. I play 2v2 with a friend who has no 1v1 rating, or a really "undeveloped" one. If 3v3 opens up, we might both be 1000-rated in 2v2 and invite a new friend, who knows?

I think blocking out teams based on one team member is a problem; if people are disqualified from 50% of matches then TMM will probably lose those players.

There will be no hard block, it will just take them longer to find a game because they produce lower game quality. It will take some time to compensate for this with the "time waited" bonus.
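For scale: with the preliminary values posted further down (quality cutoff 0.5, time bonus 0.1 per failed queue pop per team), a pairing whose fairness * uniformity comes out at 0.3 would reach the cutoff once the two teams have two failed matching attempts between them (0.3 + 2 * 0.1 = 0.5).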

@BlackYps And what about people with an undeveloped 1v1 rating? Will they need to wait 3 rounds to be matched, just because they play 1v1 very rarely but not never?

They will get a newbie bonus to help them get matched

Okay, just so it is clear: what is the initial rating range allowed between two people within the first search cycle, second, third, etc?

And what is the quality of match allowance for first, second, third?

Well, the concrete values are what I am trying to get from this thread. 4head

The actual code is this:

import statistics as stats  # for pstdev below

# `match` is a pair of teams, `config` holds the values listed further down
newbie_bonus = 0
time_bonus = 0
ratings = []
for team in match:
    # Teams that have waited through failed queue pops raise the quality
    # of any game they are part of
    time_bonus += team.failed_matching_attempts * config.TIME_BONUS_WEIGHT
    # Flat bonus for a team containing a newbie, unless a top player is also in it
    if not team.has_top_player():
        newbie_bonus += team.has_newbie() * config.NEWBIE_BONUS_WEIGHT
    for mean, dev in team.raw_ratings:
        # Conservative rating estimate: mu - 3 * sigma
        rating = mean - 3 * dev
        ratings.append(rating)

# Fairness: 1 for equal team strength, 0 once the gap reaches MAXIMUM_RATING_IMBALANCE
rating_imbalance = abs(match[0].cumulated_rating - match[1].cumulated_rating)
fairness = max((config.MAXIMUM_RATING_IMBALANCE - rating_imbalance) / config.MAXIMUM_RATING_IMBALANCE, 0)

# Uniformity: 1 if everyone has the same rating, 0 once the spread reaches MAXIMUM_RATING_DEVIATION
deviation = stats.pstdev(ratings)
uniformity = max((config.MAXIMUM_RATING_DEVIATION - deviation) / config.MAXIMUM_RATING_DEVIATION, 0)

quality = fairness * uniformity + newbie_bonus + time_bonus

The preliminary config values are:

self.NEWBIE_MIN_GAMES = 10
self.TOP_PLAYER_MIN_RATING = 1600
self.MINIMUM_GAME_QUALITY = 0.5
self.MAXIMUM_RATING_IMBALANCE = 600
self.MAXIMUM_RATING_DEVIATION = 300
self.TIME_BONUS_WEIGHT = 0.1
self.NEWBIE_BONUS_WEIGHT = 0.2
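
To get a feel for these numbers, here is a small stand-alone sketch (not the real server code; the function name and example ratings are made up) that applies the formula with these config values. Ratings are assumed to already be mu - 3 * sigma and the bonuses are left out:

import statistics as stats

MAXIMUM_RATING_IMBALANCE = 600
MAXIMUM_RATING_DEVIATION = 300
MINIMUM_GAME_QUALITY = 0.5

def game_quality(team_a, team_b):
    # fairness * uniformity only; newbie and time bonuses are omitted here
    imbalance = abs(sum(team_a) - sum(team_b))
    fairness = max((MAXIMUM_RATING_IMBALANCE - imbalance) / MAXIMUM_RATING_IMBALANCE, 0)
    deviation = stats.pstdev(team_a + team_b)
    uniformity = max((MAXIMUM_RATING_DEVIATION - deviation) / MAXIMUM_RATING_DEVIATION, 0)
    return fairness * uniformity

# Even team totals, tight spread: ~0.68, above the 0.5 cutoff
print(game_quality([1300, 1200, 1100, 1000], [1250, 1200, 1100, 1050]))
# Even team totals, but one 1800 among ~1000s: ~0.09, rejected
print(game_quality([1800, 1000, 900, 900], [1250, 1200, 1100, 1050]))

The first lineup passes the 0.5 cutoff comfortably; the second is rejected even though both teams have exactly equal total rating, because the single 1800 pushes the deviation close to the 300 limit.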

Explanation:

@BlackYps said in TMM Rating Allowance Needs to Use Ladder 1v1 Matching (or close to it):

I am currently writing a new matching algorithm to make a 4v4 queue possible. From my understanding trueskill doesn't factor in the rating disparity between players when calculating the game quality, so in order to get only games with similarly rated players we need to introduce our own quality metric. I will now explain my first draft so you can discuss whether you think it is a good formula and make suggestions to improve it.
Currently I calculate quality = uniformity * fairness + newbie bonus + time bonus.
Uniformity goes from 0 to 1: it is 1 if all players in the game have the same rating and 0 if the standard deviation of the ratings of all players is greater than 300.
Fairness works the same but looks at the difference in total team skill, so it is 0 if the rating difference between the teams is higher than 600.
The newbie bonus exists to match new players faster and is a flat bonus if a new player is in one of the teams.
The time bonus increases every time you are not successfully matched.

Additional explanation:
The deviation is roughly one third of the biggest rating difference if we assume a somewhat even rating distribution in the team.
If both uniformity and fairness are around 0.7 (√0.5 ≈ 0.71) we are right at the quality threshold of 0.5. That means that a game with roughly 200 team rating difference and roughly a 300 rating difference between individual players is the borderline case of what is acceptable. One of these metrics can be worse if the other one is better.
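As a quick check with the config values above (bonuses left out): a team rating difference of 180 gives fairness = (600 - 180) / 600 = 0.7, a deviation of 90 (roughly a 270 spread between the best and worst player) gives uniformity = (300 - 90) / 300 = 0.7, and 0.7 * 0.7 = 0.49, which is right at the 0.5 cutoff.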


Could you randomly generate some imaginary players with rating values/number of games, put them into random teams, and then calculate the quality? Without concrete examples it's really hard to judge such an algorithm.

Edit: And throw away examples where fairness is too far off.

I don't have a lot of examples because I have just started making some, but here are a few for you:
(Keep in mind that a game quality of 0.5 is the cutoff here for a game to be considered)

A "Search" is a party of players that is searching for a game. the "pX" are the player names, so you can see how many players there are in the search. The number at the end is the average rating of that search party. The game quality uses the formula that I explained in my previous post.

team a: [Search(['p12'], 842), Search(['p15'], 738), Search(['p1', 'p2'], 781)] cumulated rating: 3142   average rating: 785.5
team b: [Search(['p11'], 745), Search(['p5', 'p6'], 788), Search(['p16'], 816)] cumulated rating: 3137   average rating: 784.25
bonuses: 0.0 rating disparity: 5 -> fairness: 0.9916666666666667 deviation: 44.917806881013234 -> uniformity: 0.8502739770632892 -> game quality: 0.8431883605877618

team a: [Search(['p5', 'p6'], 788), Search(['p16'], 816), Search(['p13'], 971)] cumulated rating: 3363   average rating: 840.75
team b: [Search(['p1', 'p2'], 781), Search(['p12'], 842), Search(['p17'], 951)] cumulated rating: 3355   average rating: 838.75
bonuses: 0.0 rating disparity: 8 -> fairness: 0.9866666666666667 deviation: 79.45399612354309 -> uniformity: 0.7351533462548563 -> game quality: 0.7253513016381249

team a: [Search(['p3', 'p4'], 1004.5), Search(['p17'], 951), Search(['p16'], 816)] cumulated rating: 3776   average rating: 944
team b: [Search(['p13'], 971), Search(['p12'], 842), Search(['p5', 'p6'], 788)] cumulated rating: 3389   average rating: 847.25
bonuses: 0.0 rating disparity: 387 -> fairness: 0.355 deviation: 92.79134859996378 -> uniformity: 0.6906955046667874 -> game quality: 0.24519690415670953

team a: [Search(['p7', 'p8', 'p9'], 1011.3333333333334), Search(['p12'], 842)] cumulated rating: 3876   average rating: 969
team b: [Search(['p13'], 971), Search(['p3', 'p4'], 1004.5), Search(['p17'], 951)] cumulated rating: 3931   average rating: 982.75
bonuses: 0.0 rating disparity: 55 -> fairness: 0.9083333333333333 deviation: 67.96035149261664 -> uniformity: 0.7734654950246113 -> game quality: 0.7025644913140219

team a: [Search(['p7', 'p8', 'p9'], 1011.3333333333334), Search(['p11'], 745)] cumulated rating: 3779   average rating: 944.75
team b: [Search(['p10'], 1047), Search(['p15'], 738), Search(['p14'], 1032), Search(['p13'], 971)] cumulated rating: 3788   average rating: 947
bonuses: 0.0 rating disparity: 9 -> fairness: 0.985 deviation: 125.21026066181636 -> uniformity: 0.5826324644606121 -> game quality: 0.5738929774937029

team a: [Search(['p7', 'p8', 'p9'], 925.3333333333334), Search(['p15'], 1328)] cumulated rating: 4104   average rating: 1026
team b: [Search(['p13'], 1115), Search(['p16'], 1231), Search(['p3', 'p4'], 998)] cumulated rating: 4342   average rating: 1085.5
bonuses: 0.0 rating disparity: 238 -> fairness: 0.6033333333333334 deviation: 152.77004778424336 -> uniformity: 0.4907665073858555 -> game quality: 0.2960957927894662

team a: [Search(['p7', 'p8', 'p9'], 925.3333333333334), Search(['p13'], 1115)] cumulated rating: 3891   average rating: 972.75
team b: [Search(['p3', 'p4'], 998), Search(['p5', 'p6'], 918.5)] cumulated rating: 3833   average rating: 958.25
bonuses: 0.0 rating disparity: 58 -> fairness: 0.9033333333333333 deviation: 84.13976467758869 -> uniformity: 0.719534117741371 -> game quality: 0.6499791530263718

team a: [Search(['p7', 'p8', 'p9'], 925.3333333333334), Search(['p12'], 846)] cumulated rating: 3622   average rating: 905.5
team b: [Search(['p5', 'p6'], 918.5), Search(['p1', 'p2'], 810.5)] cumulated rating: 3458   average rating: 864.5
bonuses: 0.0 rating disparity: 164 -> fairness: 0.7266666666666667 deviation: 79.85612061701971 -> uniformity: 0.733812931276601 -> game quality: 0.5332373967276633