cyberax's comments

To be fair, the Vec::set_len bug in Rust was in 2021. And even then, it had to be annotated as `unsafe`. It was then deprecated and a linter check was added: https://github.com/rust-lang/rust-clippy/issues/7681

To be even fairer, it wasn't actually memory unsafety; it was "just" unsoundness. There was a type where, IF you gave it an io reader implementation that was weird, that implementation could see uninit data or expose uninit data elsewhere; but the only readers actually used were well-behaved ones.

Vec::set_len is by no means deprecated. The lint you linked only covers a very specific unsound pattern using set_len.

Indeed, and it doesn't need to be deprecated, because it's an API explicitly designed to give you low-level control where you need it, and because it is appropriately defined as an `unsafe` function with documented safety invariants that must be manually upheld in order for usage to be memory-safe. The documentation also suggests several other (safe) functions that should be used instead when possible, and provides correct usage examples: https://doc.rust-lang.org/std/vec/struct.Vec.html#method.set... .
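
For illustration, a minimal sketch of one correct pattern (along the lines of the examples in those docs): reserve capacity, initialize the new elements through the raw pointer, and only then commit the length.

```rust
// Build a zero-filled Vec<u8> without safe element-by-element initialization.
// (In real code you'd just write vec![0u8; n]; this is purely illustrative.)
fn zeroed_bytes(n: usize) -> Vec<u8> {
    let mut v: Vec<u8> = Vec::with_capacity(n);
    unsafe {
        // SAFETY: `write_bytes` initializes exactly `n` bytes starting at
        // the buffer, and `n <= v.capacity()` because we just reserved it.
        std::ptr::write_bytes(v.as_mut_ptr(), 0, n);
        v.set_len(n);
    }
    v
}
```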

> and because it is appropriately defined as an `unsafe` function with documented safety invariants that must be manually upheld in order for usage to be memory-safe.

Didn't we learn from C, and isn't the entire raison d'être of Rust, that coders cannot be trusted to follow rules like this?

If coders could "(document) safety invariants that must be manually upheld in order for usage to be memory-safe," there'd be no need for Rust.

This is the tautology underlying Rust, as I see it.


No, this is mistaken. Rust provides `unsafe` functions for operations where memory-safety invariants must be manually upheld, forces callers to use `unsafe` blocks in order to call those functions, and then provides tooling for auditing those blocks.

Want to keep unsafe code out of your codebase? Then add `#![forbid(unsafe_code)]` to your crate root, and all unsafe code becomes a compiler error. Or you could add a check in your CI that prevents anyone from merging code that touches an unsafe block without sign-off from a senior maintainer. And/or you can add unit tests for any code that uses unsafe blocks and then run those tests under Miri, which will loudly complain if you perform any memory-unsafe operations. And you can enable Clippy's `undocumented_unsafe_blocks` lint so that you'll never forget to document an unsafe block.

Rust's culture is that unsafe blocks should be reserved for leaf nodes in the call graph, wrapped in safe APIs whose usage does not impose manual invariant management on downstream callers. Internally, those APIs represent a relatively minuscule portion of the codebase upon which all your verification can be focused. So you don't need to "trust" that coders will remember not to call unsafe functions needlessly, because the tooling is there to have your back.
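
For concreteness, here is a minimal sketch of that `forbid` guard in action (the crate contents are invented; the attribute itself is standard Rust):

```rust
// Crate root (e.g. main.rs). With this attribute, any `unsafe` block
// anywhere in the crate becomes a hard compile error, not just a warning.
#![forbid(unsafe_code)]

fn main() {
    let v = vec![1, 2, 3];
    println!("sum = {}", v.iter().sum::<i32>());

    // Uncommenting the line below makes the build fail outright:
    // unsafe { std::hint::unreachable_unchecked() }
}
```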

> Want to keep unsafe code out of your codebase?

And how is this feasible for a systems language? Rust becomes too impotent for its main use case if you only use safe Rust.

My original point still stands... Coders historically cannot be trusted to manually manage memory, unless they're Rust coders, apparently.

> So you don't need to "trust" that coders will remember not to call unsafe functions needlessly, because the tooling is there to have your back.

By definition, it isn't possible for a tool to reason about unsafe code; otherwise the Rust compiler would do it.


> And how is this feasible for a systems language? Rust becomes too impotent for its main use case if you only use safe Rust.

No, this is completely incorrect, and one of the most interesting and surprising results of Rust as an experiment in language design. An enormous proportion of Rust codebases need no unsafe code of their own whatsoever, and even those that do tend to confine unsafe blocks to a small minority of files. Rust's hypothesis that unsafe code can be successfully encapsulated behind safe APIs suitable for the vast majority of uses has been borne out in practice. Ironically, the average unsafe block in practice exists to call a function written in C, which is a symptom of not yet having enough alternatives written in Rust. I have worked on freestanding OSes and embedded applications written in Rust (both domains where you would expect copious usage of unsafe) where I estimate less than 5% of the files actually contained unsafe blocks, meaning a 20x reduction in the effort needed to verify them (in Fred Brooks units, that's two silver bullets' worth).

> Coders historically cannot be trusted to manually manage memory, unless they're Rust coders, apparently.

Most Rust coders are not manually managing memory on the regular, or doing anything else that requires unsafe code. I'm not exaggerating when I say that it's entirely possible to have spent your entire career writing Rust code without ever having been forced to write an `unsafe` block, in the same way that Java programmers can go their entire career without using JNI.

> By definition, it isn't possible for a tool to reason about unsafe code; otherwise the Rust compiler would do it.

Of course it is. The Rust compiler reasons about unsafe code all the time. What it can't do is definitively prove many properties of unsafe code, which is why the compiler conservatively requires the annotation. But there are dozens of built-in warnings and Clippy lints that analyze unsafe blocks and attempt to flag issues early. In addition, Miri provides an interpreter for running unsafe code, giving you dynamic rather than static analysis.
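
As a concrete (made-up) example of the dynamic side: the snippet below compiles cleanly, but running it under `cargo +nightly miri run` flags the out-of-bounds read and aborts.

```rust
fn main() {
    let x = [1u8, 2, 3];
    let p = x.as_ptr();
    // The implicit "SAFETY" reasoning here is simply wrong: this reads one
    // element past the end of `x`. rustc accepts it; Miri catches it.
    let oob = unsafe { *p.add(3) };
    println!("{oob}");
}
```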


> No, this is completely incorrect,

Show me systems-level Rust code that only uses safe Rust, then... You can't, because it's impossible. It doesn't matter that it's a minority of files (!), the simple fact is you can't program systems without using unsafe. Rewrite the C dependencies in Rust and the amount of unsafe code increases massively.

> Most Rust coders are not manually managing memory on the regular

Another sidestep. If coders in general cannot be trusted to manage memory, why can a Rust coder be trusted all of a sudden?

> But there are dozens of built-in warnings and Clippy lints that analyze unsafe blocks and attempt to flag issues early.

We already had that, it wasn't enough, hence... Rust, remember?


You are missing the forest for the trees here. The goal of `unsafe` isn't to prevent you from writing unsafe code. It's to prevent you from writing unsafe code by accident. That was always the goal. If you reread the comments through that lens, I'm sure they'll make more sense.


Rust has never been about outright eliminating unsafe code; it's about encapsulating that unsafe code within a safe, externally usable API.

When creating a dynamically sized array type, it's much simpler to reason about its invariants when you assume only its public methods have access to its capacity and length fields, rather than trusting the user to remember to update those fields themselves.

The above is an analogy, and that particular problem is obviously fixed by using opaque accessor functions, but Rust takes it further by encapsulating raw pointer usage itself.

The whole ethos of unsafe Rust is that you encapsulate usages of things like raw pointers and mutable static variables in smaller, more easily verifiable modules rather than having everyone deal with them directly.
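
As a hedged sketch of what that encapsulation looks like in practice (the type below is invented for illustration), the raw-memory handling is private, and the public methods are the only code that must be audited to trust the whole type:

```rust
use std::mem::MaybeUninit;

/// A toy fixed-capacity byte buffer with a fully safe public API.
pub struct TinyBuf {
    data: [MaybeUninit<u8>; 16],
    len: usize, // invariant: data[..len] is initialized and len <= 16
}

impl TinyBuf {
    pub fn new() -> Self {
        Self { data: [MaybeUninit::uninit(); 16], len: 0 }
    }

    /// Safe to call: the bounds check upholds the `len <= 16` invariant.
    pub fn push(&mut self, byte: u8) -> bool {
        if self.len == 16 {
            return false;
        }
        self.data[self.len].write(byte);
        self.len += 1;
        true
    }

    /// Safe to call: only the initialized prefix is ever exposed.
    pub fn as_slice(&self) -> &[u8] {
        // SAFETY: data[..len] was initialized by `push`, per the invariant.
        unsafe { std::slice::from_raw_parts(self.data.as_ptr().cast(), self.len) }
    }
}
```

Callers only ever see `new`/`push`/`as_slice`, so no amount of downstream code can violate the invariant.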


The issue with C is that every single use of a pointer needs to come with safety invariants (at its most basic: when you pass a pointer to my function, do I take ownership of it or not?). You cannot legitimately expect people to be that alert 100% of the time.

Conversely, you can write whole applications in Rust without ever touching `unsafe` directly, so that keyword by itself signals the need for attention (both to the programmer and to the reviewer or auditor). An unsafe block without a safety comment next to it is a very easy red flag to catch.


> when you pass a pointer to my function, do I take ownership of it or not?

It's honestly frustrating how prevalent this is in C, and the docs often don't even tell you. If you guess that it takes ownership, so you make a copy for yourself, and you were wrong, you've just leaked memory; if you guessed the other way, you now have a potential double-free, a use-after-free, or data mutated behind your back.
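
For contrast, a minimal sketch of how Rust moves that convention out of the docs and into the type system (function names invented here):

```rust
// Ownership is explicit in the signature: this function consumes the
// buffer and is responsible for it (it's freed automatically on return).
fn consume(buf: Vec<u8>) {
    println!("taking ownership of {} bytes", buf.len());
}

// This one only borrows; the caller keeps ownership.
fn inspect(buf: &[u8]) -> usize {
    buf.len()
}

fn main() {
    let data = vec![1u8, 2, 3];
    let n = inspect(&data); // fine: `data` is still owned by main
    consume(data);          // ownership moves into `consume`
    // inspect(&data);      // compile error: borrow of moved value
    println!("inspected {n} bytes");
}
```

Guessing wrong stops being possible: misuse is a compile error rather than a leak or a double-free.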


The specific use case the GNU maintainer listed followed this exact pattern.

The reason is that classical grids are mostly self-correcting. Rotating machines stabilize frequency through their inertia and can produce or absorb reactive power.

"Reactive power" sounds fancy, but it just means that motors can create a drag. The power lines are giant capacitors, and capacitors have the lowest effective resistance when they are discharged. So the current is greatest when the voltage crosses the zero mark. Inductive (rotating) loads are the opposite, their effective resistance is greatest when the current starts to rise or fall. So this limits the initial inrush of the current.

But there's more! When you have a transformer and a long line, you can essentially get a boost converter. The voltage from a transformer travels through a low-resistance wire until it reaches the end, and because the line can be modeled as a series of capacitors, you essentially get a "charge pump" ( https://en.wikipedia.org/wiki/Charge_pump ). From the viewpoint of the generator you have one large capacitor, but from the viewpoint of a consumer in the middle of the line, you have two capacitors in series.

As a result, the voltage in power lines can _spike_ if there's not enough rotating load. This is called the Ferranti effect, and in Spain it was the primary reason for the faults.
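
For a rough sense of the magnitude (textbook lossless-line numbers, not figures from the Spanish incident), the open-ended line approximation looks like this:

```latex
% Receiving-end voltage rise on an open (lightly loaded) lossless line:
\[
  \frac{V_r}{V_s} = \frac{1}{\cos(\beta \ell)}, \qquad
  \beta = \omega \sqrt{L'C'} \approx 0.00105~\text{rad/km at 50 Hz}
\]
% Example: a 300 km line gives \beta\ell \approx 0.315 rad, so
% V_r/V_s \approx 1/\cos(0.315) \approx 1.05, i.e. roughly a 5% voltage
% rise at the far end when nothing is absorbing the reactive power.
```

Rotating loads absorbing reactive power keep that rise in check; without them, that compensation has to be engineered in explicitly.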

This is all fixable, but it requires investment and regulation. And Spain (and other countries) have been neglecting that, by incentivizing the cheapest possible generation.


I think the tyre problem is not really a thing. EVs use synchronous motors and traction control to avoid extra wear due to uneven torque during normal driving.

I can't remember if it was here or on Reddit, but I read a tyre shop mechanic saying that some EV users replace their tyres very often, because EVs make it easy to drive very aggressively.

And others don't. We replaced our EV tires at about 80 000 km.

The increased weight due to the battery is the bigger issue for wear on tires. A lot of EVs weigh a good 500kg more than their ICE counterparts.

I think the bigger issue is torque. EVs have a lot more torque and it is easier to use, so the tyres can slip more often, which then leads to wear.

My understanding is that the torque control speed is much faster though, so it's actually difficult to get the tires to slip. I can't screech my tires in my EV, but it'll do 0-60 ridiculously fast.

Anecdotally, my Kia Niro EV goes through tyres a lot faster than the two equivalent internal combustion vehicles in the family.

That said, the Niro weighs ~50% more than the other vehicles, and it has significantly higher acceleration/braking, so I'd hazard it gets driven harder on average.


My computer had 16 MB in 1997, and it was lower-range but not the absolute bottom.

It looks like AnandTech listed 128 MB for $300 (not inflation-adjusted) in 1997. It fell to $150 in 1998, and by 1999 you could buy it for $100.

So 512 MB of RAM by the end of 1999 for ~$200 was plausible.


Cisco is doing great. Sun got acquired by Oracle. Oracle itself is also fine (apart from being Oracle). Akamai is doing fine.

From the pure software side, Macromedia got acquired. Red Hat was doing fine before IBM gobbled it up. But I honestly can't remember any other "picks and shovels" software companies from pre-dotcom.



The glass-in-the-ground people went spectacularly broke. I also suggest you look up the stock price chart for JDSU. On the software side, Ariba and Commerce One.

Yeah, hardware companies got hit hard. But the dotcom bust also coincided with the de-industrialization era, with manufacturing moving out of the US, adding a double whammy of commoditization. So it's hard to disentangle the causes.

And I can't really remember many Internet-focused software picks-and-shovels companies from that era. I was only starting my professional career at that time, though.


Qwest

3Com / US Robotics - dead

Nortel - dead

Global Crossing - dead


Microsoft - doing fine

Netscape - dead (server) and/or dying (Mozilla)

Intel - almost dead

Palm - dead

Qualcomm - still around


INTC shot up >300% in the past 8 months and is now at its highest stock price ever, fwiw.

I guess Netscape counts. Palm produced devices, so it was not really picks&shovels.

Who else? Borland quietly withered away, but it had never been focused on tools specifically for the Internet.


Eh. Just start removing bike lanes. They're destroying businesses and making life worse for everyone.

And yes, I have numbers. In Seattle, the business receipts from areas with bike lanes declined faster than receipts from areas nearby that do NOT have bike lanes.

Correlation shmorellation... I bet you were going to cite studies showing how bike lanes improved business, and how proprietors were surprised at the percentage of customers arriving on bikes, right?


Yep, I have friends who ran small businesses in cities (Seattle, Portland, SF) and sold them specifically because of how bike lanes destroyed their business.

People who are busy need to get around quickly and aren’t going to tolerate biking around. And it’s especially impractical with kids - not that this stops bike activists from trying to gaslight everyone into saying it’s totally possible and takes exactly the same effort. The bike lanes almost always displace either traffic lanes or parking, so driving gets worse. And customers realize they have better things to do and alternative choices on where they spend money.

The bike lanes themselves are, of course, often very poorly utilized. So traffic gets worse, businesses suffer, and it’s all for nothing. Now all these cities have left is intentionally crippling driving with low speed limits, speed bumps, and other hostile designs. It’s a way to try and claim that driving is no faster, even though it is trivial to keep driving fast and efficient.


mRNA is not a good example. If anything, it's a demonstration of why the Western capitalist model is superior to anything else. Most of the mRNA research was funded by venture capital as a high-risk high-reward investment.

In the world of government-sponsored research, mRNA likely would have been passed over in favor of funding research with more assured results.


AI model files can be rather large...

Hah. I used a Dremel tool, some heatsinks, and a bit of thermal glue to make my Mikrotik switch work reliably: https://pics.ealex.net/share/UxeSf_AWHLIuc-qzK5zl7JIgQvQDAZh...

It's been like this for the last 3 years. And amazingly, I still can't find a 10G switch that is just as compact.


This is the kind of quality I want and expect from a website called Hacker News.

It's way more fun to see a real solution for a problem than it is to see someone complain that the cheapest available product is lacking in finesse.

Good stuff. Are you using RouterOS or SwOS on that little guy?

---

Related, here's a moneyshot of my Mikrotik Hex S that I've got in a portable rack: https://i.postimg.cc/cCJhfkv1/image.png

That very cheap gigabit copper SFP was running hotter than I'd like -- it probably would have been fine, but this rig is meant to run outside while camping off-grid in the sun in central Florida. So I put some heatsinks from my 3D printing stash on there and so far they've stayed put.

In this system, the Hex S is running OpenWRT and is configured as a PoE-powered managed switch. In that role, it switches packets and does VLAN stuff fine, and is probably a bit of overkill.

But it's also one of several layers of manual redundancy, which is important in that environment: one does not simply go to the store and buy special electronics in central Florida. So if it isn't included in the travel kit, then it doesn't exist.

With one shell script, it stops being just-a-switch and becomes a router with all the usual services, plus SQM tricks and multiple WAN ports. The rig works well.


RouterOS, although I'm only using the switch-related functionality.

I found that the temperature of the 10G modules has almost no relation to their cost. So far, the coolest-running modules are the 10Gtek ones, which are also the cheapest. Mikrotik's 10G modules are more expensive, and they also run hotter.


One other thing to try: set the MTU to 9000. But don't do this on your main interface, or you'll get haunted by traffic being blackholed.

At home, I have a separate VLAN for the 9k packets. It has a separate subnet (both v6 and v4), so it works perfectly. The devices on this VLAN use it directly if they can, and everything else goes through the router, which sends proper ICMP "too big" messages.

