One big problem is that the verification tries to estimate your age instead of looking up who the person actually is and then checking that person's age. If the lookup returns that the face belongs to a video game character, it should reject outright rather than try to estimate the age of that character.
What if we...now hear me out....what if we didn't try to shoehorn a stupid and unworkable technological solution into this problem space and just...made parents responsible for their kids?
No no, too radical. Parents don't have time; they need it to scroll some shitty social media cash grab so they can feel even shittier about their lives.
... and we would like to call our generation "smart", while knowing deep inside very well what failures as parents many of our generation are. The proof, for or against, is our kids right in front of our eyes, and there is no escaping this basic truth. That's why it's so crushing.
Sorry, gotta go; need to check some shitty sites that spy on me and try, in vain, to push primitive ads on me.
Says a lot about the state of society when parenting is outsourced to technology, so that the parents can be further enslaved (because almost no one chooses to work two jobs).
Most of a "living wage" is from the cost of living. We make living space artificially scarce and then your rent is high but so is the rent on the small businesses that employ people. The restaurant can't pay the waitress more when their own costs have gone up, and the money is going to the landlords rather than the employers.
Likewise, when some megacorps capture the government and monopolize a market, the costs go up on both individuals and all the employers in other markets who are now paying monopoly rents with the money they could have otherwise used to hire more people (bidding up wages) or lower the prices workers pay when they buy their products.
Just asking them to pay more doesn't work when the party you want to pay more isn't the party which is extracting the money, and higher costs are just as much of a problem as lower wages.
> If you end it with "and make a good easy to use technical solution instead" then you found my stance.
That assumes a good easy to use technical solution is possible. What if classifying user-generated content as safe for kids is enormously subjective, and the labor required to accurately classify it even given a hypothetical objective standard would cost more than users are willing to pay to have it done?
(you can sort of do this in countries with national ID schemes if you don't care about foreigners; for example, various people have found this in China where random things are gated behind having a WeChat account which requires a Chinese ID. You can't do this in the US or UK, which are big pushers of the ""age verification"" scheme)
You don't need an ID. For example, you can crawl the internet for selfies and then try to tie each face to the person it belongs to. With enough datasets you can assemble a database covering enough of the relevant people that it's acceptable to fall back to deeper validation for the people you didn't collect a face for.
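A minimal sketch of the lookup idea above, assuming faces have already been reduced to embedding vectors by some model: compare a query embedding against a scraped database by cosine similarity, and return no match (triggering "deeper validation") when nothing clears a confidence threshold. The names, vectors, and the 0.8 threshold are all made up for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def lookup(query, database, threshold=0.8):
    """Return the best-matching identity, or None to trigger deeper validation."""
    best_name, best_score = None, -1.0
    for name, embedding in database.items():
        score = cosine(query, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy database of (hypothetical) face embeddings scraped from selfies.
db = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.8, 0.5]}

print(lookup([0.88, 0.12, 0.31], db))  # close to alice's embedding -> alice
print(lookup([1.0, -1.0, 0.0], db))    # no confident match -> None
```

Real pipelines would use a learned face-embedding model and an approximate nearest-neighbor index instead of a linear scan, but the match-or-escalate structure is the same.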
This Mozilla report is low quality and treats legal boilerplate as proof of spying. It says a car is snooping on you via its microphone even if that microphone is used purely to support Bluetooth calls.
What if the competitor's architecture is able to produce tokens twice as fast? What if the competitor secures a one-month exclusivity deal on Nvidia's next generation?
People regularly give businesses a >50% cut to get cash immediately, and this isn't even counting resellers who lowball people who don't know what they have.
The entirety of school should eventually be replaced with just this one class. AI is able to teach people anything they may want or need to know and it can design effective ways for people to study. Being able to use, interpret, and work together with AI is going to be one of the most important skills of the 21st century.
You know why most kids don't do this already? Because they don't know what they don't know. Telling a 2nd grader to go learn anything they want is not going to have the result you apparently think it will.
This level of naivety is characteristic of certain SV types where wishful thinking is the order of the day. We're already living through the disastrous effects of the "social media" revolution and this is going to be much more of the same, with even worse negative effects on society.
Just imagine what this will do to critical thinking, interpersonal relationships and family dynamics in a country where illiteracy is rapidly climbing. I don't think it's a stretch to write that if the unrestrained capitulation in terms of societal costs towards big tech continues, we're setting ourselves up for {generational, class-based} conflict that will rip our country to pieces.
Maybe so. Still, learning how to tell when the AI is blowing smoke is going to be an important skill, and I'm not sure that AIs are going to be great at teaching that to you.
And learning when other people (AI salespeople, say) are blowing smoke is also an important skill. Again, I'm not sure that AIs are great at teaching that.
I've had PAM break due to a distro's ridiculous policy of updating the system in place, which allows invalid combinations of files to exist. I've had Linux distros break the boot process countless times.
Except that if you tried one-shotting your ticket twenty times at different hours of the day and on different days of the week, you would see enough variation to move benchmark results even if you used the same model every time. Much more so if you fiddled with the thinking or changed the prompt.
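This effect can be illustrated with a toy simulation: treat each one-shot attempt as a coin flip with a fixed pass rate (i.e. the model never changes), run several 20-attempt "benchmarks", and look at how far apart the measured pass rates land. The 0.6 pass rate and the run counts are arbitrary assumptions, not measurements of any real model.

```python
import random

random.seed(0)

TRUE_PASS_RATE = 0.6  # hypothetical fixed ability of one unchanging model
ATTEMPTS = 20         # one-shot attempts per "benchmark"
BENCHMARKS = 10       # number of independent benchmark runs

# Each benchmark is 20 Bernoulli trials against the same fixed pass rate.
runs = [[1 if random.random() < TRUE_PASS_RATE else 0 for _ in range(ATTEMPTS)]
        for _ in range(BENCHMARKS)]

rates = [sum(run) / len(run) for run in runs]
spread = max(rates) - min(rates)

print("measured pass rates:", rates)
print("spread between best and worst run:", spread)
```

Even with nothing changing underneath, the measured rates typically differ by ten or more percentage points between runs, which is easy to misread as the model getting better or worse.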
Because it's non-deterministic, because of constant updates and changes, and because the models are throttled according to the number of users, releases, et al.
You never get "the same" Steph Curry: he might be tired, annoyed by a fan, getting older... but if he and I each threw 100 3-pointers, we could all correctly guess who would perform better.
But I use Codex and Claude daily (work and hobby respectively), and there are days where one or the other just seems to have gotten up on the wrong side of the bed. Or is just being lazy. Or is suddenly super-powered, doing everything including what I asked it not to. (To be fair, the same thing happens with me. :/)
I am convinced that if I were benchmarking, I would be convinced these are different models on different days.
[This conviction may say more about me than about the model.]
That's also fair, Anthropic lobotomized their services a couple of times already. One week, you are in awe that the tools figure out everything, explain everything, consider everything, produce a clean fix... next week, they are completely useless.