Good work, but any modelling-derived outcome (regardless of how many variables) relies on at least some degree of "predictability", i.e. some semblance of logic.
Trust me, SSR: we support a club that doesn't do the P-word. For some reason. It's who we are…
In other words: if you value your PC, just run the potential final league standings without including fixtures involving our club; otherwise it'll end up melting into a blob of plastic and molten copper.
Exhibit A: lose at home to ITFC; next game, do the deed away against CITEH.
COYS!
Actually, up until the Spurs-Ipswich match, our predictability in terms of goals scored and conceded per match was about the same as the rest of the league. We had averaged a deviation of +/- 0.97 from expected goals scored (highest of the top 10 teams: Chelsea at +/- 1.05; lowest: Forest at +/- 0.68) and +/- 0.59 from expected goals conceded (highest of the top 10: Brighton at +/- 0.96; lowest: Liverpool at +/- 0.50).
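For context, those deviation figures are just the mean absolute difference between actual and model-expected goals, averaged over matches. A minimal sketch of the calculation in Python, with placeholder numbers rather than the real match-by-match data:

```python
# Measuring "predictability" as the mean absolute deviation between actual
# and model-expected goals per match. The numbers below are illustrative
# placeholders, not real model output.
matches = [
    # (expected goals for, actual goals for, expected goals against, actual goals against)
    (2.6, 1, 1.2, 1),
    (1.8, 2, 2.1, 3),
    (1.7, 0, 0.9, 1),
]

def mean_abs_deviation(pairs):
    """Average of |actual - expected| across matches."""
    return sum(abs(actual - expected) for expected, actual in pairs) / len(pairs)

scored = mean_abs_deviation([(xg_for, g_for) for xg_for, g_for, _, _ in matches])
conceded = mean_abs_deviation([(xg_ag, g_ag) for _, _, xg_ag, g_ag in matches])
print(f"Goals scored deviation:   +/- {scored:.2f}")
print(f"Goals conceded deviation: +/- {conceded:.2f}")
```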
The problem was that a variance of 1 or 2 goals from the expected scoreline had cost us a lot of points in three games in particular:
| Match | Model Predicted Score | Actual Score | Points Lost vs Most Likely Result |
| --- | --- | --- | --- |
| Leicester (A) | 1.2 - 2.6 | 1 - 1 | 2 |
| Brighton (A) | 2.1 - 1.8 | 3 - 2 | 1 |
| Palace (A) | 0.9 - 1.7 | 1 - 0 | 3 |
The Opta "expected points" table reflects that too.
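For anyone wondering how a scoreline prediction turns into an expected-points figure: one common approach (I'm assuming something like this sits underneath the Opta table; that's an assumption, not a statement about their method) is to treat each side's goals as independent Poisson variables with the model's expected goals as the means, then sum the probability mass over win/draw/loss. A sketch, assuming the home side is listed first in the table above:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson goal count with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

def result_probs(xg_for, xg_against, max_goals=10):
    """Win/draw/loss probabilities assuming independent Poisson goal counts."""
    p_win = p_draw = p_loss = 0.0
    for i in range(max_goals + 1):        # goals for
        for j in range(max_goals + 1):    # goals against
            p = poisson_pmf(i, xg_for) * poisson_pmf(j, xg_against)
            if i > j:
                p_win += p
            elif i == j:
                p_draw += p
            else:
                p_loss += p
    return p_win, p_draw, p_loss

def expected_points(xg_for, xg_against):
    p_win, p_draw, _ = result_probs(xg_for, xg_against)
    return 3 * p_win + p_draw

# Palace (A) from the table: model scoreline 0.9 - 1.7, Spurs the away side.
xp = expected_points(1.7, 0.9)   # Spurs' expected points from that scoreline
print(f"Expected points: {xp:.2f}, actual (1 - 0 loss): 0")
```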
But then the Ipswich game happened, where the model's predicted score was a 4.6 - 1.1 win and it ended up a 1 - 2 loss, and that's where my PC starts to melt into a blob of plastic, silicon and molten copper. My model has significantly de-rated Spurs as a result, and it will take several weeks of good results to restore its former faith.
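To show the de-rating mechanics, here's a purely illustrative Elo-style update (not a description of what my model actually does under the hood; the K-factor and ratings are made up): a heavy favourite losing gets hit hard, and it takes several routine wins to claw the rating back.

```python
def elo_update(rating, opp_rating, score, k=32):
    """Elo update; score is 1 for a win, 0.5 for a draw, 0 for a loss."""
    expected = 1 / (1 + 10 ** ((opp_rating - rating) / 400))
    return rating + k * (score - expected)

spurs, ipswich = 1650, 1450              # made-up ratings: Spurs heavy favourites
spurs = elo_update(spurs, ipswich, 0)    # the 1 - 2 home loss
print(f"After the shock loss: {spurs:.0f}")   # ~24-point drop from a ~76% expected win

for _ in range(4):                       # four expected wins only claw it back gradually
    spurs = elo_update(spurs, 1450, 1)
print(f"After four routine wins: {spurs:.0f}")
```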