I'm extremely happy after upgrading my network to 10gbit copper ethernet. It was much more expensive than I thought it should be, but worth it even if I only max it occasionally. Now I can easily saturate my 10gbit ethernet doing a first Time Machine backup or transferring files to my M.2 SSD NAS, which saves me waiting time and is satisfying to watch.
It's wild to me that 10gbit isn't the norm by now and that tech people who should know better seem to think WiFi matches or even exceeds 1gbit ethernet. My MBP connects to my WiFi7 setup (Ubiquiti E7) at a nominal 1.5-1.9gbit, but Time Machine backups and file transfers are slower than plugging into 1gbit ethernet, probably in large part due to latency and retransmissions. Not to mention that ethernet works with near 100% reliability and dramatically less variation in speed and error rate.
It's wild to me Time Machine works on your network. Are you just doing "first backups" over and over again, or have you somehow achieved the very rare state where Time Machine can run for, say, a week at a time without falling over?
Sorry, this is snarky and off topic, but I'm nostalgic for the days when Time Machine "just worked".
For a very long time I thought Time Machine had become flaky, and I'm sure it's partially to blame, but with my current setup I've literally never observed it corrupt a backup and have to start over.
Before I was using one of the common Synology consumer NAS boxes that are often recommended. The NAS didn't report any errors with the drives or its own hardware, but at least once a month TM would glitch on at least one of my home laptops.
My new setup is an Asus FLASHSTOR 12 Pro Gen2 FS6812X. For a year now it's been running without a single apparent TM glitch while backing up multiple personal laptops and my work laptop. Sometimes I'm plugged in and sometimes I'm backing up over WiFi, but it's always worked.
I tried various recommended settings for the Synology and nothing helped, so I strongly suspect that the Synology network protocol (SMB, AFP, etc.) implementations were either buggy themselves or at least not compatible with quirks in Apple's implementations. Synology->Asus fixed all my TM problems instantly and seemingly permanently!
I can't remember the exact phrasing, but are you talking about the error message that essentially says:
"The Tardis is broken. Your backup has diverged into an entirely separate timeline, and I have no way of reconciling it. You may now sacrifice an entire weekend to do an initial backup again."?
I've been on a lucky streak for several years now, where I haven't gotten that one on any of my devices.
"Preparing backup..." taking an unreasonable amount of time is a regular occurrence, and some edge cases around adjusting TM backup size quotas aren't handled well. But other than that, TM has been working reasonably well for me to back up 10 TB over SMB to a Synology NAS.
My gripe is much more with Apple's abysmal support for SMB and NFS, especially after deprecating AFP. I've been back and forth between them over the years and over several OS versions, and their implementations for both are just terrible.
But over time SMB, for me, proved slightly more stable and performant, with the right tweaks in smb.conf, and authentication and permissions/ownership are easier to deal with than NFS, so I stuck with that.
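In case it helps anyone, the tweaks usually meant here are Samba's vfs_fruit options; a minimal sketch (the share name and path are placeholders, and your particular NAS may want slightly different values):

    [timemachine]
        path = /srv/timemachine
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:time machine = yes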
I also yearn for the days where TM just worked, because somehow, the alternatives are even worse:
- Arq Backup does some things quite well, which is why I use it as part of my 3-2-1. But some of its bugs and implementation decisions just scream "hobby grade" to me.
- Kopia looks interesting, but it's not mature enough yet. Failed for me with absolutely cryptic error messages during repo init both times I tried it, with versions several months apart.
- Restic, Borg / Vorta: Not turnkey enough for me.
> "Preparing backup..." taking an unreasonable amount of time is a regular occurrence,
TM heavily throttles disk I/O used for backing up in order to ensure that normal user activity isn't affected. That makes TM appear dramatically slower than you would expect, which greatly annoys me. This becomes obvious after you run this command, which will make both the preparing and transferring phases run closer to the theoretical speed you'd expect:
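(Judging from the sysctl mention below, the command in question is presumably macOS's low-priority I/O throttle toggle; note it resets to the default on reboot:)

    sudo sysctl debug.lowpri_throttle_enabled=0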
> TM heavily throttles disk I/O used for backing up
That makes sense, and I usually quite like that behavior. I barely ever notice an impact when backups are running.
However, this is happening every time on one machine (Intel iMac), and semi-regularly on another (M3 MBP), even after a fresh restart, giving mds_stores some time to settle down, with the most recent backup just hours ago and no significant changes on disk since.
In a situation like that, I would expect the "Preparing backup..." stage to just take a second to create an APFS snapshot, and maybe a minute to diff that snapshot against the remote state. But not 10+ minutes.
But thank you for the hint about that sysctl parameter! I will certainly give this a try.
I've been using Time Machine for six months pointing to a network share on my TrueNAS box and it has worked fine. Sometimes a backup will fail when the Mac is taken off my home network (it doesn't play nice with Tailscale for whatever reason) but it will always work again if I tell it to retry the failed backup once I'm back on the local network.
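For what it's worth, the retry doesn't need the UI; if memory serves, kicking off a backup from the terminal with Apple's tmutil does the same thing:

    tmutil startbackup --auto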
> It's wild to me that 10gbit isn't the norm by now
I honestly just don't need it. Part of that is my ISP options top out around 1200Mbps, certainly. But I also just don't need that kind of speed inside my home. Streaming video needs at most 20Mbps or so, and I don't do much in the way of large file transfers. And when I do large file transfers, it's usually from or to the internet, and my home network is not the bottleneck there.
> It's wild to me that 10gbit isn't the norm by now
10G was too big of a step up from 1G. The expense and power required made it unattractive. Only recently have the interfaces for 10G over twisted pair become reasonably low power.
2.5G and even 5G are in a much better spot. It's where I recommend most people start as a default.
Yeah, 10GbE-over-copper switches are SO expensive. I bought a Ubiquiti enterprise switch for my home; 10Gb uplink and the copper ports are split between 2.5G and 1G. It's fine because only 1 or 2 clients can even talk 10G, and those are all across the house on Cat5 links anyway so they only negotiate to 2.5 even on a 10G port.
As much as I wanted to "future proof" by having a 24 port 10GbE switch... why? I'll just wait and buy one when I have a use for it.
> As much as I wanted to "future proof" by having a 24 port 10GbE switch... why? I'll just wait and buy one when I have a use for it.
As much as I enjoy looking at those wiring cabinets where every cable is cut to exactly the right length to reach a single port on the switch, this is why I prefer to leave an amount of slack in the wiring: It's good to be able to pull different wires to different switches depending on your needs.
One small high speed switch with enough ports for the couple of devices that can use it. One gigabit switch with a lot of ports to provide connectivity everywhere else.
This is what I ended up doing. A 2.5G switch for the few devices that can use it and a 1Gbit+PoE switch for all the other PoE and 1Gbit and less devices.
It does, actually. I dismissed it originally because I don't need most of the capacity, but considering I have no 10GbE copper ports available, and my home is being remodeled and I'm running some new ethernet to the other side of the house, I could take advantage of these ports now.
Also, I didn't realize my UDM Pro was kneecapped at 3-4Gbps IDS/IPS throughput. I think when I bought it I only had a 1 gig internet connection so it was overkill, but now I've got 10gig. It didn't even cross my mind that I had this bottleneck.
5Gb barely exists and mostly costs the same as 10Gb. 2.5Gb is taking off and mostly replacing 1Gb, but try to find a 5Gb switch; there really aren't any, and most that exist appear to be 10Gb switches with a 5Gb PHY in them.
>It's wild to me that 10gbit isn't the norm by now and tech people who should know better seem to think WiFi matches or even exceeds even 1gbit ethernet.
2 things here. Upthread is the discussion about the old 10GbE modules that would constantly shut off due to overheating. That's left a sour taste in a lot of people's mouths.
I don't have anything in my home network that matters enough to have 10GbE anywhere. If I did, I would just get fibre. My wireless is fine for most purposes except some HD streaming, and plain old 1 gig works fine there.
Getting Time Machine to work reliably over a network is painful; even the old Apple-made AirPort with built-in TM broke about twice a year.
However, I have multiple Macs where I simply have a USB-C laptop SSD attached for Time Machine and they have worked without issue for years. These laptop SSDs come in huge sizes nowadays, and you don't need an especially performant one, so they can be pretty cheap.
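(If you prefer the terminal for the initial setup, pointing TM at such a disk is a one-liner; the volume name here is just a placeholder for whatever you formatted the SSD as:)

    sudo tmutil setdestination /Volumes/TM-SSD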
Heh it's honestly wild to me anyone needs over a gig. My work has a one gig fiber line supporting hundreds of employees and usage generally remains below 10%.
The high expense of 10gig is, in part, because it isn't widely necessary and the people buying it are willing to pay extra.
> The high expense of 10gig is, in part, because it isn't widely necessary and the people buying it are willing to pay extra.
I think the price has more to do with where you live and how the market is structured than how necessary it is. In Japan where there is competition between ISPs, I pay about $40/mo for 10Gbps.
Routers have also come down in price to where they are pretty affordable for consumers. I use a Ubiquiti Cloud Gateway Fiber [1] which has three 10Gbps ports (two SFP+, one 10GbE) for $279. A TP-Link router [2] with an upstream 10GbE port, 2.5GbE LAN ports, and Wi-Fi 7 is about $140. 2.5GbE NICs have become cheap and ubiquitous and could commonly be found on $150 mini PCs (before memory and SSD prices went crazy).
Yeah, it's more than most people need, but I definitely appreciate having the increased speed when downloading 50GB games, uploading 200GB files to YouTube, or backing up files to the cloud. I've probably never maxed out the full 10Gbps, but exceeding 1Gbps is pretty easy in relatively common use cases.
And if you don't need a router, you can get 10gbit ports for much cheaper than that. The Mikrotik CRS305-1G-4S gives you one copper 1gbit port and four 10gbit-capable SFP+ cages for ~150USD [0]. Their whole lineup of switches with SFP+ ports can be seen here [1].
I put 5Gbit internet into my home (fiber) to build my startup. I'm processing terabytes of data. I have over 100TB of storage in my basement. I can regularly saturate my internet connection. That said, I remember well when a 1Gbit connection provided enough bandwidth for a 500-person call center for daily workloads (back about 10 years ago).
That puts you in an extreme minority, even amongst enterprise businesses. Many medium sized enterprises have storage that looks like "a couple dozen TB total" for hundreds of staff.
Having 100 TB of storage in your home basement is an even more extreme minority than that. ;)
A gigabit connection is more than enough for a 500-person call center today.
Agreed. We had IP phones with 100 Mbps switches in between most of our computers and the rest of the network for a long time and very few people noticed. It'd only really be when I was installing a system upgrade or something that I'd be like "man, it'd be nice if this didn't take an extra two minutes". For normal web access, 100 Mbps and 1000 Mbps aren't really discernible until you're downloading large files. With a lot of 4K streaming video, though, you'll start to feel it quite a bit sooner.
And then hilariously, once you go above a gig, the reality is most sites won't serve them to you any faster than that anyways.
I found the nicest thing about fiber is I can hit over a Gb/s uploading, which is often much more on the critical path for whatever I'm doing than a download.
If that 500-person call center is a BPO, they'd have trouble with today's data-heavy workloads. I do agree, though. I'm in an extreme minority when it comes to this much bandwidth in my home.
Depends a lot on your work type and your prior exposure. If you only work "locally" and upload/download rarely, you may be way less demanding of your network than if you actually do distributed work with remote storage, high-bandwidth communicating tasks, etc.
Over 20 years ago, I was used to having 1g LAN for basic workstations and laptops in an office setting and probably 10-20g uplink from the building (shared by hundreds of staff). I also used 1g at home for my very small LAN between laptop, desktop, and SAN functions. But, my home ISP links were often terrible, such as 128k ADSL or even just a tethered GPRS phone at some points.
You end up with entirely different work styles when you have these different resource constraints.
1Gb Internet service seems low these days, much less 1Gb LAN. I have 3Gb Google Fiber service and actually get 2+ for individual downloads from some internet services like Steam. Even at 2Gb it's annoying to wait tens of minutes for 100+ GiB games to download. If I go on vacation I come home with 10s of GiBs of photos and videos on multiple devices that start syncing with cloud storage.
During the day I need to pull large data files from the work VPN so it's nice that that can happen at full speed even when Steam and movie streaming are also at full throttle. Combine that with backups and moving various files back and forth to my NAS and I'm very happy to have 10Gb local wiring.
This is one of those things where I just have to express that a lot of the HN crowd is entirely divorced from the reality the rest of the world experiences. ;)
Nearly nobody has multigig anything in the home, a probably surprisingly large percentage of business networking is 1gig LAN or less. And most people would not notice the difference if they did.
I am glad it works for you, but everyone else most certainly doesn't need it. (Yet.)
Personally, I do try for mostly gigabit in my home, because I do selfhost, but I have a ~800 Mbps download service (200 Mbps upload, it's asymmetric) that was only 500 Mbps when I signed up. And to be honest most of my patch cables are CAT5e because I'm cheap. I do make sure to run CAT6 through walls though because I don't want to ever have to do it again.
Also, I used to have Astound, and I feel so much sympathy for Google Fiber customers, you have no idea what's coming. If you thought Google had a reputation for bad customer service... just wait!
I disagree. I pay $120 a month for a 5Gbps symmetric connection. I could upgrade that to 10Gbps for 2x, but there is no reason at this point. Even the local max from the cable company is more than 1Gbps: 1.2Gbps down / ~300Mbps up for around $80.

Everything is streaming now. I work from home, on video calls. My better half will be watching something streaming on the AppleTV, the kid will be doing the same. I have Backblaze running to do backups to the cloud, and 3 different laptops that will run Time Machine backups to the NAS. The AppleTVs also have the Infuse app on them to stream local video files from the NAS. The security cameras are a constant 60Mbps 24/7/365 to the NVR. The laptops can push a gig wireless and 2.5Gbps when plugged into the Thunderbolt docks.

It is not clear that I need 10Gbps everywhere, but it has its uses. The NAS is at 10G. The link from the main switch to the router is 10G. The 3 APs in the house are at 2.5G and the 2 outside are at 1G. There was a noticeable difference after I right-sized the shared link paths up from 1G, and by noticeable I mean both perceived and measured. I used to work doing switch bring-up and competitive testing, so I have a pretty good idea how this all comes together.

Given that a reasonably cheap set of APs can now handle clients at above 1G, and internet speeds in some areas are above 1G, moving to at least 2.5G in places is useful and not divorced from reality. I am in tech, but I have helped my non-tech friends upgrade APs, etc. for their normal everyday home use cases and they have all been quite happy with the change.
Not being divorced from reality is the only reason I have not dropped $5K on the new Dream Machine Beast that was just released and have not swapped out my Enterprise 48 PoE (1st gen.) for the newest version that has 12 10G-BaseT ports.
I have it more for the fast NAS access and being able to treat NAS disks as more or less the same performance as if they were directly SATA-attached in my machine. Significantly less so for the external network aspect.
> It was much more expensive than I thought it should be
For 10G over copper, short runs (~10m) will work fine on most anything, with longer runs (20-30m or so) working OK even on cat-5e cable.
So, if you have existing wiring, especially in-wall, just give it a try first before gutting things. Most 10G ethernet adapters expose BER stats so you can see the error rate.
And if there are issues, multi-gig may be an option, where the link drops down to 5 or 2.5Gbps: essentially the same 10GBASE-T signaling run at a reduced rate to handle the lower SNR.
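On a Linux box, a quick sanity check of what the link actually negotiated and whether errors are accumulating might look like this (the interface name is an assumption, and exact counter names vary by driver):

    ethtool eth0                  # negotiated speed, e.g. "Speed: 10000Mb/s"
    ethtool -S eth0 | grep -i err # driver-level error/CRC counters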
You can definitely get stable 1gig throughput over WiFi. It doesn't even need WiFi 7/6E; it's possible on 6. I ran a WiFi bridge like that for years: no packet loss, a consistent gig of throughput, and maybe +1ms latency.
The gotcha is that both ends need to be good radios, so a router-to-router bridge tends to work better than router to end-user device. I also had near line-of-sight, which presumably helped a ton.
It only made sense in the SPA way of working. Allowing the history to be updated lets the browser's default navigation keep working. Outside of SPA-type sites, it was only ever going to be abused.
I'm a backend dev and I'm always hearing about how LLMs are dramatically better at frontend because of much more available training data etc. Maybe my perspective isn't as skewed as I've been led to believe and LLMs need close supervision and rework of their output there too.
I would trust an LLM with backend much more than front-end. Especially if we're talking monolithic and good type system. Ideally compiled. When I say trust I mean it's not going to break the user-facing API contract, probably, even if internally it's a mess. If you let it do front-end blind, it will almost certainly embarrass you if you care at all about user experience.
If you do backend blind, it will also almost certainly embarrass you. I’ve never had an experience beyond the most basic crud app where I didn’t have to somehow use my engineering experience to dig it out of a hole.
Works mostly fine for me on Rust backends. As long as I'm willing to accept tight contracts at the edges with spaghetti in the middle, or otherwise gate approval for everything it does.
If I want good abstractions, sure, I can set up approvals and babysit it with reprompting, because it will do stupid things that an experienced engineer wouldn't. But the spaghetti also works in the sense that it takes the input types and largely correctly maps them to the output types.
That doesn't embarrass me with customers because they never see the internals. On the front-end, obviously, they will see and experience whatever abomination it cooks up directly.
Toyota just had three large EV announcements and they are putting large incentives on some of them. Feels like they're serious about it and with so many others exiting the EV market lately they may have timed it well.
Kagi is the ONLY search engine I know that currently doesn’t suck and I highly recommend it to anyone who will listen. I would cancel almost any other subscription before Kagi.
> Since if Kagi does search as well as the Google of old, and I can adjust its searches to prioritise results from known good (to me) sites, then that probably is worth paying for
As I was saying in another thread, Kagi is what Google would have been if they had kept improving instead of transitioning to enshittifying ~10yrs ago. If I had to pick I'd keep it over Netflix.
I’ve also been using Kagi for the past ~2 years. At first I would always also search Google to see if I was missing out on better results, but after a couple of months I no longer bothered because Kagi did better 99% of the time. It’s worth the cost, I’d keep it over Netflix if I had to choose.
TLDR: Kagi is what Google would have been if they had kept improving instead of transitioning to enshittifying ~10yrs ago.
It only took me a couple of weeks to go from "I can't ever imagine paying for search" to "I will never use free search again". Kagi is the best bang for buck of any subscription I pay for.
App stores may reduce many of my freedoms, but they also provide me with some other freedoms by limiting the power of big tech companies over me, and the tradeoffs are different for my phone compared to a PC. For example, Apple uses their big stick to ensure that apps can't simply refuse to work if you enable privacy settings that limit them. If Facebook refuses to work until you give it full access to your photos and exact location even when the app isn't running, the realistic outcome is that everyone will just give them what they want rather than not using the service. I remember years ago on Android that Google Maps would refuse to work if I didn't allow it to access my location when it wasn't running, and I never want to go back to that world.
> For example Apple uses their big stick to ensure that apps can't simply refuse to work if you enable privacy setting that limit them. If Facebook refuses to work until you give it full access to your photos and exact location even when the app isn't running the realistic outcome will be that everyone will just give them what they want rather than not using the service
Apple also stops you from installing third-party apps for the service that circumvent those and other limitations. In an open system you can intercept the app's requests and feed it fake responses, spoof your photo album, GPS, whatever. They can try to detect spoofing, but at the cost of making their services flaky for normal users. This is a cat-and-mouse game that the mice (that's you) win. Except you can't play it on an iPhone, because it breaks the service's (probably illegal) Terms of Service, and Apple will use their Big Stick to ensure nobody can commit acts that risk their partners' business models.
One of the main reasons I use Postgres is that I've rarely (never?) seen an article like this posted about it. Every time I've touched MySQL I've found a new footgun.
MySQL is the PHP of databases. It was free, easy to set up, and had the spotlight at the right time. The bad decisions baked into MySQL are plentiful and really sad (like the botched utf8 defaults, the MyISAM storage engine, strange stuff around replication, and much more).
I don't have all the details anymore, but one of the non-obvious things for me was that foreign key cascades were not in the binlogs. I also think that some changes in the database layout could lead to strange things on the replicas.
> I also think that some changes in the database layout could lead to strange things on the replicas.
I've been using MySQL for 23 years and have no idea what you're referring to here, sorry. But it's not like other DBs have quirk-free replication either. Postgres logical replication doesn't handle DDL at all, for example.
I appreciate you carrying the torch for MySQL here, since most opinions are based on setups over a decade old, with little to no bearing on how it runs today
It still blows my mind that they called that crappy, partial, buggy character set "utf8". Then later came out with actual utf8 and called it "utf8mb4". Makes no sense.
They should have addressed it much earlier, but it makes way more sense in historical context: when MySQL added utf8 support in early 2003, the UTF-8 standard still permitted up to 6 bytes per character. That had excessive storage implications, and emoji weren't in widespread use at all at the time. 3 bytes were sufficient to store the majority of characters in use then, so that's what they went with.
And once they made that choice, there was no easy fix that was also backwards-compatible. MySQL avoids breaking binary data compatibility across upgrades: aside from a few special cases like fractional time support, an upgrade doesn't require rebuilding any of your tables.
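A quick way to see the boundary, purely for illustration (Python): everything in the Basic Multilingual Plane encodes to at most 3 bytes in UTF-8, while emoji and other supplementary-plane characters need 4, which is exactly what old utf8 rejected and utf8mb4 accepts:

    # UTF-8 byte lengths; old MySQL "utf8" stores at most 3 bytes
    # per character, so the last two would be rejected.
    for ch in ["A", "é", "€", "😀", "𠜎"]:
        n = len(ch.encode("utf-8"))
        print(ch, n, "bytes:", "fits old utf8" if n <= 3 else "needs utf8mb4")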
Your explanation makes it sound like an incredibly stupid decision. I imagine what you're getting at is that 3 bytes were/are sufficient for the Basic Multilingual Plane, which is incidentally also what can be represented in a single UTF-16 code unit. So they imposed the same limitation on UTF-8 as UTF-16 had. This would have seemed logical in a world where UTF-16 was the default and UTF-8 was some annoying exception they had to get out of the way.
OK, but that makes perfect sense given utf-16 was actually quite widespread in 2003! For example, Windows APIs, MS SQL Server, JavaScript (off the top of my head)... these all still primarily use utf-16 today even. And MySQL also supports utf-16 among many other charsets.
UTF-8 wasn't a clear winner at the time, especially given its 6-byte-max representation back then. Memory and storage were a lot more limited.
And yes, while 6 bytes was the maximum, a bunch of critical paths (e.g. sorting logic) in old MySQL required allocating a worst-case buffer size, so this would have been prohibitively expensive.
This still makes no sense. The UTF-8 standard was adopted around 1998 and was already variable-width, using 1 to 4 bytes. MySQL 4.1, which introduced the utf8 charset, was released in 2004.
Even if there were no codepoints in the 4-byte range yet, they could and should have implemented it anyway. It literally does not take any more storage, because it is a variable-width encoding.
> The UTF-8 standard was adopted really in 1998-ish and the standard was already variable using 1 to 4 bytes.
No, it was 1 to 6 bytes until RFC 3629 (Nov 2003). AFAIK development of MySQL 4.1 began prior to that, despite the release not happening until afterwards.
Again, they absolutely should have addressed it sooner. But people make mistakes, especially as we're talking about a venture-funded startup in the years right after the dot-com crash.
> It literally does not take any more storage because it is a variable width encoding.
I already addressed that in my previous comment: in old versions of MySQL, a number of critical code paths required allocating worst-case buffer sizes, or accounting for worst-case value lengths in indexes, etc. So if a charset allows 6 bytes per character, that means multiplying max length by 6, in order to handle the pathological case.
> In my view nonnegative real numbers have good physical representations: amount, size, distance, position
I'm not a physicist, but do we actually know whether distance and time can vary continuously, or is there a smallest unit of distance or time? A physics equation might tell you a particle moves pi meters in sqrt(2) seconds, but are those even possible physical quantities? I'm not sure we even know whether the universe's size is infinite or finite.
I am not a physicist either, but isn't the smallest unit of distance the Planck length?
I searched what the smallest time unit is, and it's the Planck time.
The smallest unit of time is called Planck time, which is approximately 5.39 × 10⁻⁴⁴ seconds. It is theorized to be the shortest meaningful time interval that can be measured. Wikipedia (Pasted from DDG AI)
From what I can tell there could be smaller time intervals than these, but they would be impossible to measure.
I also don't know, but from this I feel as if Heisenberg's uncertainty principle (where you can only accurately know either momentum or position, but not both at the same time) might also be applicable here?
> A physics equation might tell you a particle moves Pi meters in sqrt(2) seconds but are those even possible physical quantities
To be honest, once again (I am not a physicist), Pi is a circumference/diameter ratio and sqrt(2) is the hypotenuse of a right isosceles triangle with unit legs. I feel as if an experiment could be set up where a particle does indeed move pi meters in sqrt(2) seconds, but both of them would be approximations in the real world.
Pi in a real-world sense, in terms of the Planck length/Planck time, can in my opinion only be measured so far. So would sqrt(2).
The thing is, it might take infinitely minute changes, which would be unmeasurable.
So what I am trying to say is: suppose we have an infinite number of machines, each with a particle that moves pi meters in sqrt(2) seconds, with only infinitely minute differences between them. There might be one that is exactly accurate among all the infinite variations.
But we literally can't know which one, because we physically can't measure past a point.
I think these definitions of pi / sqrt(2) also live in a more abstract world, with useful approximations in the real world that can change depending on how much error is okay (I have seen jokes about engineers approximating pi as 3).
They are useful constructs which actually help for practical/engineering purposes, while still lying on a line we can comprehend (we can put pi between 3 and 4; we can comprehend it).
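To put a number on "can only be measured so far": the Planck length is about 1.6×10⁻³⁵ m, so for a metre-scale experiment any digits of pi beyond roughly the 35th decimal place are physically meaningless. A toy check, assuming the third-party mpmath library is available (purely illustrative):

    import mpmath

    mpmath.mp.dps = 50                        # work with 50 significant digits
    planck_length = mpmath.mpf("1.6e-35")     # metres (rough value)
    # pi truncated to ~36 decimal places vs. full-precision pi
    pi_truncated = mpmath.mpf(mpmath.nstr(mpmath.pi, 37))
    error = abs(mpmath.pi - pi_truncated)     # truncation error over a 1 m path
    print(error < planck_length)              # True: the difference is sub-Planck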
Now, imaginary numbers are useful constructs too, with practical engineering use cases, but the reason such discussion happens, in my opinion, is that they aren't intuitive: they aren't between two real numbers, but rather have a completely new axis, the imaginary line, because they don't lie anywhere on the real number line.
It's kind of scary for me to imagine what the first person who thought of imaginary numbers as a line perpendicular to the real numbers must have thought.
It literally opened up a new dimension for mathematics and introduced plane/graph-like properties; one can now imagine circles, squares, and so many other shapes in pure numbers/algebra.
e^(pi * i) = -1 is one of the most (if not the most) elegant equations for a reason.
Planck's length has absolutely no known physical significance. It is just a combination of fundamental constants that happens to have the dimension of a length.
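(For concreteness: the Planck length is defined as l_P = sqrt(ħG/c³) ≈ 1.6×10⁻³⁵ m and the Planck time as t_P = sqrt(ħG/c⁵) ≈ 5.4×10⁻⁴⁴ s. They are dimensional combinations of ħ, G, and c, and nothing in established physics singles them out as a smallest possible step.)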
The so-called Planck system of units, proposed by him in 1899 when he computed what is now called Planck's constant, is an example of how a system of fundamental units should not be defined. Explaining exactly the mistakes Planck made there would require more space than is available here.
Unfortunately, probably because most physics textbooks do an extremely poor job of explaining the foundation of physics, which is the theory of the measurement of physical quantities, most people are not aware that the Planck system of units is completely bogus, as are a few other similar attempts, like the Stoney system of units.
Thus far too often one sees people on the Internet talking about the "Planck units" as if they meant something.
Unlike the "Planck units", there are fundamental constants that really do mean something. For instance, the so-called "fine structure constant", a.k.a. Sommerfeld's constant, is the ratio between the speed of an electron and the speed of light when the electron moves on the orbit corresponding to the lowest total energy around a nucleus of infinite mass.
This "fine structure constant" is a measure of the strength of the electromagnetic interaction, as the Newtonian constant of gravitation is a measure of the strength of the gravitational interaction. The Planck length and time are derived from the Newtonian constant of gravitation, and they are so small because the gravitational interaction is much weaker, but they do not correspond to any quantities that could characterize a physical system.
For now, there exists no evidence whatsoever of some minimum value for length or time, i.e. there exists no evidence that time and length are not indefinitely divisible.