There's nothing in this video's theory that I disagree with, but I think it overstates how good recommendation analysis is currently.
The University of Minnesota runs an experimental movie recommendation site, MovieLens, which represents a decent approximation of the state of the art. I've been using it for years. It's better than just using IMDb, which is biased toward what immature internet users like. And it's still nearly useless. I often find the movies it recommends unwatchable. If I highly rate a slow movie like The Killing of a Sacred Deer, it immediately assumes I'll like anything unwatchably boring, like I Don't Feel at Home in This World Anymore.
At the other extreme, I think it's clear that someone who deeply understands art and culture, and has a wide-ranging appreciation for various tastes, could interview another person about their tastes and recommend movies or books or whatever that they would like. So, like, if you had my wife rate all the movies she's ever seen, and then she watched several new ones that I watched separately, I think I could outperform any existing recommendation algorithm on mean squared error. (It might pick up on subtle systematic trends, like "she tends to rate horror 1.3 stars less than her average," that I don't have the bandwidth to grok, but I think I would do a much better job avoiding gross misjudgments of cultural context. I know she'll like Girls' Trip way better than Pitch Perfect, partly because I can bring in tons of other contextual information about how a movie's qualities jibe with or conflict with her interests that the recommender has only indirect access to.)
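To make the "systematic trend" point concrete: the kind of pattern a recommender can learn is a per-genre offset from a user's mean rating, and it gets scored on mean squared error against actual ratings. Here's a minimal sketch of that baseline; all ratings and genres below are invented for illustration:

```python
# Toy version of a "systematic offset" baseline: predict a user's rating as
# their overall mean plus a learned per-genre offset (e.g. "she tends to
# rate horror 0.75 stars below her average"). Ratings are made up.
from collections import defaultdict

def fit_offsets(ratings):
    """ratings: list of (genre, stars). Returns (user mean, per-genre offsets)."""
    user_mean = sum(stars for _, stars in ratings) / len(ratings)
    by_genre = defaultdict(list)
    for genre, stars in ratings:
        by_genre[genre].append(stars)
    offsets = {g: sum(v) / len(v) - user_mean for g, v in by_genre.items()}
    return user_mean, offsets

def predict(user_mean, offsets, genre):
    # Unseen genres fall back to the user's overall mean.
    return user_mean + offsets.get(genre, 0.0)

def mse(pairs):
    """pairs: list of (predicted, actual) ratings."""
    return sum((p - a) ** 2 for p, a in pairs) / len(pairs)

history = [("drama", 4.5), ("drama", 4.0), ("horror", 2.5), ("horror", 3.0)]
user_mean, offsets = fit_offsets(history)   # mean 3.5, horror offset -0.75
prediction = predict(user_mean, offsets, "horror")  # 3.5 - 0.75 = 2.75
```

This captures the low-bandwidth statistical regularity; what it can't capture is the contextual judgment in the Girls' Trip example, which depends on information outside the rating history entirely.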
So let's agree that this sort of "ambient, ongoing, intrepid recommendation" logic exists somewhere between the capability of machine learning today (let's call that t=0), and the capability of an artificial superintelligence ("ASI") with all the capabilities of a brilliant human plus fast execution and the ability to process jobs in parallel (let's call that t=1).
The question is, what is the value of t necessary to make the sort of vision in "The Selfish Ledger" practically helpful?
I think the popular consensus is something like t=0.2. But I suspect it's more like t=0.7.
I do appreciate the concept of a profile that has proved itself so valuable that you prefer to add to it rather than go without a profile (or start over). But the machine learning aspects of this seem wildly overblown. Google has astonishing capabilities to perform machine learning on my information now. How is it being used to help me? By recognizing spam, certainly, but I don't think my own spam categorizations make much of a difference. By recognizing the people and things in my photos and making it easier to search for them--cool, but hardly life-changing, and still very primitive. By recognizing my speech on my Android phone when I'm transcribing--but there seems to be very little learning of my specific voice or (I suspect) tracking of my corrections after the fact; there's no capability to specifically train it on my voice, for one thing.
The quality of the user experience across Google products has everything to do with product design and engineering to support that design, and next to nothing to do with machine learning.