
Sounds like reasonable changes.

Generally speaking, I think evidence tampering is not a new problem, and even though it's easy in some cases, I don't think it's _that_ widespread. Just like it's possible to lie on the stand, but people usually think twice before they do it, because _if_ they are found to have lied, they're in trouble.

My main concern is rather that legit evidence can now easily be called into question. That seems to me like a much higher risk than fake evidence, considering the overall dynamics.

But ultimately: Humanity has coped without photo, audio or video evidence for most of its existence. I suppose it will cope again.



I agree. The article didn't touch on this aspect, but we're now at the point where even authentic recordings could be plausibly denied and claimed to be fake. So the entire usage of recordings as evidence will suffer a hit. We may essentially be knocked back to an 18th century level of reliance on eyewitness testimony. One wonders what the consequences for justice will be.


I wouldn't say we'd be quite back to pre-photo evidence days. I feel that much, if not most, of the value in a video/audio recording is not just that the medium has traditionally been difficult to edit, but that it attests to a lot of details with high specificity. There's a lot to potentially get caught out on, with not a lot of wiggle room when inconsistencies are spotted (compared to recalling from memory). Document scans and static images are still useful despite having long been trivial to edit, for instance.


I think forensic analysis of photographs would reveal pretty quickly if they’re AI anyway.

AI images of real life still don’t pass intense scrutiny.


And we have been able to edit photos for nearly as long as we have been able to make them. Yet photo evidence continues to be useful.


I won't deny that genAI is still a long way from passing the strictest scrutiny, but I don't like the above argument. Ease of use is not a trivial matter, and extreme advances in accessibility can cause a phase-change. Photo evidence continues to be useful because editing photos is below the critical threshold for becoming a problem. If it becomes easy, low-skill, and reliable, at that point we will cross into the realm of Reasonable Doubt. It's the difference between possible, and practical, and pervasive.


Right... forensics, the famously reliable discipline.


You can entirely remove the word “forensics” and my statement would be the same…


It's just another thing you'll have to pay for as the defendant.


There's already a process for this; it's called chain of custody. If you can't prove the evidence has a solid chain of custody, then it was potentially tampered with and isn't reliable.


The usual chain of custody goes something like this: the store has a video surveillance system, the police collect the footage from it, and so the chain of custody runs through the store and the police, which implies that nobody other than those two has tampered with it.

But then you have an inside job where the perpetrators work for the store and have doctored the footage before the police come to pick it up, or a corrupt cop who wants to convict someone without proving their case or is accepting bribes to convict the wrong person and now has easy access to forgeries. Chain of custody can't help you in either of these cases, and both of those things definitely happen in real life, so how do you determine when they happen or don't?


Surely chain of custody applies if the accused has access to the evidence? Perhaps I’m missing your point or I’m overly optimistic about the legal system.


Suppose the store manager is having a dispute with a kid who keeps skateboarding in the parking lot, so the store manager decides to commit insurance fraud by robbing the store herself and then submits forged video of the kid doing it to the police.

The store manager is in the chain of custody but isn't a suspect, the accused is the kid. The kid doesn't even know who actually committed the crime. How is the kid supposed to prove this?


In this case, chain of custody needs to extend to the capture device itself, and to any software that exists in the supply chain for the video content.

There are some experimental specifications that exist to provide attestation as to the authenticity of media. But most of what I’ve seen so far is a “perjury based” approach that just requires a human to say that something is authentic.


Chain of custody isn't real as long as the judiciary gives the government a 'good faith' pass when chain of custody isn't maintained or documentable in court. Go into LexisNexis and look up 'good faith' related to 'chain of custody'. Any 'protections' that can be waived away at the judge's whim when the standard isn't met by the government are not actually real, but pure theater lending the American judicial system a legitimacy it doesn't deserve.


> In this case, chain of custody needs to extend to the capture device itself, and to any software that exists in the supply chain for the video content.

There are two major problems with this.

First, is all footage from existing surveillance systems going to be thrown out because it doesn't use this technology? Answer: No, because it would be impractical. But then nobody cares to adopt the technology because using it isn't required. How's that IPv6 transition going?

Second, that sort of thing doesn't actually work anyway. Surveillance cameras are made by the lowest bidder. Their security record is appalling. They're going to publish their private keys on github and expose buffer overflows to the public internet and leave a telnet server running on the camera that gives you a root shell with no password. Does it sound like hyperbole? Those are all things that have actually happened.

There is only one known way to prevent this from happening: Do not allow the hardware vendor to write the software. Any of the software. Instead, demand hardware documentation so that the firmware can be written by open source software people instead of lowest bidder hardware companies. This is incompatible with using the hardware vendor as the root of trust, which is a natural consequence because the hardware vendors are completely untrustworthy.

But let's suppose we find some way to do it. We'll pass a law imposing a $100 fine on any company that has a security vulnerability. Then there will never be a security vulnerability again, because security vulnerabilities will be illegal; I'm assured this is how laws work. At that point the forger takes the camera, points it at a high-resolution playback of the forgery, and the camera records and signs the forgery.

I kind of wish people would stop suggesting this. It's completely useless but it creates the false impression that it can be solved this way and then people stop trying to find a real solution.


Yep, "chain of custody." Came here hoping to see that concept discussed since it's how the system already deals with cases of potential evidence tampering. If the evidence is of material importance and there's no sufficiently credible chain of custody, then its validity can be questioned. The concept started around purely physical evidence but applies to image, audio and video. The good thing about the ubiquity of deepfake memes on social media is that it familiarizes judges and juries with how easy it now is to create plausible fake media.


Chain of custody only covers from when the evidence came into the hands of the police; the real issue here is original provenance, which chain of custody doesn't solve.

Evidence of provenance is already important, to be sure, but the ability to have some degree of validation of the contents has itself provided some evidence of provenance; lose that and there is a real challenge.


This is unironically a use case for blockchain.


Who needs a whole blockchain? Just basic public-key cryptography would do the job.

Imagine, if you will, that the NVR (recording system) has a unique private key flashed in during manufacturing, with the corresponding public key printed on its nameplate. The device can sign a video clip and its related metadata before exporting. Now, any decent hacker could see possible holes in this system, but it could be made tamper-resistant enough that any non-expert wouldn't be able to fabricate a signed video. Then the evidence becomes the signed video plus the NVR's serial number and public key. Not perfect, but probably good enough.
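A minimal sketch of that sign-and-verify flow, using textbook RSA with toy parameters (the key sizes, function names, and metadata format here are all made up for illustration; a real device would use a vetted scheme like Ed25519 with proper hashing and padding):

```python
import hashlib

# Toy RSA keypair built from two Mersenne primes -- far too small and too
# structured for real use, but enough to show the mechanics.
P, Q = 2**31 - 1, 2**61 - 1
N = P * Q                          # public modulus (printed on the nameplate)
E = 65537                          # public exponent
D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent (flashed into the device)

def _digest(clip: bytes, metadata: bytes) -> int:
    # Hash the clip together with its metadata so neither can be swapped out.
    return int.from_bytes(hashlib.sha256(clip + metadata).digest(), "big") % N

def sign_export(clip: bytes, metadata: bytes) -> int:
    """Device-side: sign the digest with the private key before exporting."""
    return pow(_digest(clip, metadata), D, N)

def verify_export(clip: bytes, metadata: bytes, signature: int) -> bool:
    """Anyone holding the nameplate public key (N, E) can check the export."""
    return pow(signature, E, N) == _digest(clip, metadata)

clip = b"\x00\x01fake-h264-payload"
meta = b"serial=NVR-1234;start=2024-05-01T12:00:00Z"
sig = sign_export(clip, meta)
print(verify_export(clip, meta, sig))         # True
print(verify_export(clip + b"!", meta, sig))  # False: any edit breaks the signature
```

Note the limits the commenter already flags: this proves the export matches what the NVR signed, not that the NVR recorded a real scene, and it stands or falls on the private key actually staying inside the device.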


Unfortunately, consumer devices often have weak cryptography built into them. The properly implemented ones are out of reach for the average consumer.


More than just a blockchain, you need a decentralized set of oracles to mutually corroborate information. https://polykey.com/blog/ai-detection-versus-cryptographic-p...


This is such BS. The government is ALWAYS deferred to when the chain of custody is broken, because 'good faith' is applied. As long as 'good faith' is routinely dispensed, 'chain of custody' is nothing but propaganda for the justice system, not an actual tool used for justice.

As long as chain of custody can be discarded because of 'good faith' whenever it becomes inconvenient, it is not a real thing.


I can easily imagine a future where video evidence is only acceptable in the form of chemically developed analog film, at resolutions that are prohibitively expensive to model, and audio recordings of any kind are not admissible as evidence at all. Signatures on paper, faxes, etc are, of course, inadmissible, too.


The point of a signature on paper is that (at least in my country) you can summon somebody to court and ask "is this your signature?" If they say it is, it is, even if it doesn't look much like any of their other signatures. There might then be a suspicion of false testimony or blackmail, etc., but that's not always the most important issue in a case. E.g., if it's a contract and all the signers state that they agree with its terms, there is not much else to discuss.


> we're now at the point where even authentic recordings could be plausibly denied and claimed to be fake.

We've been there for at least two years.

https://arstechnica.com/tech-policy/2023/04/judge-slams-tesl...


Digital forensics will continue to be an in-demand skill!


Have you ever heard some of the wiretap or hidden microphone recordings used to convict mafia bosses back in the 1980s? It was so bad I can't believe it was accepted. It could easily have been faked. The only thing that made it work was the sworn statements of authenticity from the people who did the recording, and the chain of custody thereafter.


I know a lawyer who was convicted on one of those. The detectives had the mic + transmitter taped to his groin so it would get through a pat down. Then the undercover just spoke both sides of the conversation (the defendant's side was whispered and muffled). Someone testified it sounded like the defendant. Conviction was overturned on appeal on a simpler issue unrelated to the evidence.

In fact, I remember a drug dealer I was helping with his defense. Hidden mic on undercover was taped under his armpit. He arrived to defendant's hotel room for a deal and defendant made him undress. The recording is hilarious because defendant is like "You fucking snitch, what's that under your armpit?" and the undercover says "It's my .. er .. MP3 player?" LOL


So what happened to him?


I refer you to the precedent of Snitches v. Stitches


Later affirmed by Snitches v. Ditches


The first one got a get-out-of-jail-free card. The second one got away because the drug dealer didn't want to do anything to him while he had a mic + transmitter attached. If anything happened to him later, I wasn't privy to that..!


>But ultimately: Humanity has coped without photo, audio or video evidence for most of its existence.

which allowed witches to be burned based on the testimony of citizens in good standing.

And that leads us to using Neuralink and similar tech, and next-gen lie detectors (like, say, the defendant's fMRI, most probably interpreted by AI too :)), to look into the brain and extract a confession. No need for evidence, depositions and all that expensive, time-consuming stuff standing in the way of truth, especially given that it can't be trusted anymore in the face of AI capabilities.


I kind of want to have an LLM which takes absolutely any criticism of AI or news of it doing something bad and then generates a plausible HN comment that basically goes "I don't think it's a new problem, X has always been possible, there's nothing really new"... because that comment always appears like clockwork beneath it :)


Strangely, I'm not even offended by the notion that an AI can replace this work I apparently do :D Though my thought here was just pure optimism in the face of something bad, not an attempt to frame a bad side effect of something good as not a big deal.

When it comes to generative AI, I personally don't see a lot of good applications, but a plethora of bad ones. The only solution I could imagine would be regulation to the degree that using or distributing models with certain capabilities is just illegal. Judging from the war on file sharing a few decades ago, probably very difficult to enforce, even if it is perhaps still worth doing.

But I don't see any governments lining up to do it. Given that, I think we'll manage to deal with this particular (semi-)new development, that generative AI is effective for evidence tampering.


The nuance that's usually missing is that new technology invariably enables the same things but more of them and faster.


Everyone misses this all the time.

Most problems can be overlooked because they’re not that prolific. There’s a chain of events that really hinges on that assumption.

For example, piracy is really not a problem. In fact, I'm pro-piracy most of the time. But if everyone could pirate with no thought and no downtime, I don't think our economy would survive. Just because something is okay in small doses doesn't mean it scales well. We understand this intuitively with substances, but not with technology like LLMs.


Do you think the people who keep commenting "X was always a problem" are really unaware of this? Or they are just trying to dampen any alarm on purpose?


I think they’re genuinely unaware of this because many, maybe most, people who analyze risk casually don’t think about ease or probability. Because it’s easier that way, and typically allows them to maintain some beliefs about personal freedoms.


A snowflake isn't an avalanche, but if you let them build up, there is potential.


I mean, you are already getting the result for free, so why do you need an LLM for it?

In any case, you can probably do this with a fairly simple prompt with any LLM.


That's a great point. The ability to undermine real evidence by claiming it's AI-generated could be just as damaging as fake evidence itself, if not more.


Well, and to generate fake evidence to lead cops off the trail for as long as possible. Evidence could be said to have a half-life. The longer (real) evidence goes ungathered, the more it decays in one way or another. For example, security footage gets overwritten. Witness memory gets foggy. Physical items get thrown away.

If the cops spend the first month going after some poor mark who had nothing to do with it, by the time they realize they've been had, it will likely be much more difficult to catch the actual perp.

Also, that's saying nothing about the court of public opinion.


And given how quickly narratives form online, even if the truth eventually comes out, public opinion might not catch up


> I don't think it's _that_ widespread. Just like it's possible to lie on the stand, but people usually think twice before they do it, because _if_ they are found to have lied, they're in trouble.

I don't have data, but I suspect a LOT of people lie on the stand. This is mostly based on what I see on reality court shows and true crime type shows, so admittedly not a great sample, but I figure once something gets to trial things are going to be contentious.


At this stage it's more a risk for people who have their likeness out in the public domain where it can be copied. But that's just about everyone these days.


> But ultimately: Humanity has coped without photo, audio or video evidence for most of its existence. I suppose it will cope again.

Same argument goes for electricity, the internet, sanitation, democracy... "we just didn't have it before and survived" doesn't seem like a great test.


Well, I'm not arguing "good riddance". I just optimistically think we'll manage. I wouldn't want to miss any of the things you listed, actually. I'd also prefer evidence tampering to be impossible, or at least very difficult. But that's not a call I can make. All I can decide, for things out of my control, is how I think about them. And here I'm carefully optimistic.


But your argument for why we can live without audio or video evidence was that we lived like that before. All I'm saying is that's not a great argument. You can probably come up with other arguments and they might be perfectly valid.


Fair point. I didn't mean it to sound that dismissive. I'd rather not live without electricity, but if something were happening that eliminated electricity, I'd probably say "we did it before, we can do it again", too. And yeah, in that case I'd probably just say it, not post it to HN, for technical reasons.


I mean, technically, a lot of murderers did get away with their crimes in the days before video evidence, so there is that.



