Most of these are treated as backwards incompatibilities by the stdlib folks and most users.
The one that's not is that adding a field or method could break importing code. The scenario here is that you embed two types, one from the upgrading lib and one not, and the upgrading lib adds a name that collides with one in the other type. Rather than pick a winner in that conflict (last embed in the type definition wins, say), Go raises an error so the human can explicitly resolve it by, e.g. renaming one of the conflicting names if you control the code (gorename helps!) or changing one of the embeds to a regular member.
This can happen, but I haven't seen it occur in the time I've been around the Go community. Never adding methods doesn't make sense; silently resolving those conflicts could _still_ cause a backcompat issue and seems worse than the status quo; and if you do hit the conflict, it's often easy to resolve (gorename!). So I think calling it backwards compatible to add a method/field is reasonable (and it's what everyone already does).
The practical backwards compatibility things that I have seen come up tend to have less to do with folks hitting cases like that than with changes that are theoretically right but expose code that was buggy and happened to work before: programs that relied on racy map accesses working, wrong cgo pointer usage, invalid Less functions passed to sort, trickiness around net protocol interfaces and buggy clients, or parallel tests exposing something.
> The practical backwards compatibility things that I have seen come up
Yes, I'm not really talking about practical problems :) This article is mainly a response to various people claiming, "backwards compatibility is easy, it's a breakage if you are breaking the build".
FWIW, most of the scenarios you mention I wouldn't really call a backwards compatibility issue either, though. I'd say they are bugs that are exposed by a change in API.
Yeah. I'm trying to say, sure, that's a clever note on what can, in very precise circumstances, happen whenever any field/method is added. But as working engineers, focusing on the practical problems is, well, the practical thing to do.
If what you're thinking is "well in that case, people should agree that adding methods/fields is OK," I think they do, and, e.g. https://golang.org/doc/go1compat spells out that methods may be added and how that can interact with multiple embedding.
Practically, I don't want my upstreams to stop adding methods or fields to structs or to create an extra step they have to go through before adding one. That slows down upstream dev; it's not a net win for me as a downstream or a cost I want to impose on (sometimes-unpaid) OSS maintainers.
I also don't want a tool to flag every upstream field/method addition as a breaking change, because that's noise to me in the common case where the change doesn't break _my_ program. Really, if I want a back-compat test, I should upgrade and try to run my code's tests since that can shake out many more kinds of break, including ones due to my bugs/my dependencies on undoc'd stuff. I should do it whether the API was tweaked or not, because any change can change behavior.
Again, you did make a clever observation of a potential build break; I'm just saying I don't think that Go library authors need to spend more time worrying about potential name collisions with multiple embedding, or that people should specifically build workflows and tools around it.
> But as working engineers, focusing on the practical problems is, well, the practical thing to do.
Well, yeah, I'm an engineer too (just educated as a mathematician) :)
FWIW, I tried to solve an engineering problem, namely "how can we get the advantages of SemVer while working around human deficiencies in setting and maintaining version numbers". Or: "why should a human need to figure out a version number if a computer can do it for me". But the thing is, this turned out not to be an engineering problem but a political one; if there is no obviously correct interpretation of "breaking change", then for a tool to be acceptable, you'd need to get people to accept its limited interpretation of "breaking". I simply wasn't willing to put up with that political challenge (others were) :)
> If what you're thinking is "well in that case, people should agree that adding methods/fields is OK," I think they do, and, e.g. https://golang.org/doc/go1compat spells out that methods may be added and how that can interact with multiple embedding.
True. AFAIR that section was added around the time I wrote that article (I believe it was in tip a couple of days beforehand).
> Practically, I don't want my upstreams to stop adding methods or fields to structs or to create an extra step they have to go through before adding one. That slows down upstream dev; it's not a net win for me as a downstream or a cost I want to impose on (sometimes-unpaid) OSS maintainers.
I agree.
> I also don't want a tool to flag every upstream field/method addition as a breaking change, because that's noise to me in the common case where the change doesn't break _my_ program. Really, if I want a back-compat test, I should upgrade and try to run my code's tests since that can shake out many more kinds of break, including ones due to my bugs/my dependencies on undoc'd stuff
But you, as the author of a package, are not really the target audience either: you are able to fix compilation errors, but your users likely are not.
But yeah, you probably also just aren't the addressee of my article. It is mostly addressed to people claiming that versioning in Go is a solved problem and SemVer is the solution. It's not.
Your observation is pretty much the point of the article: a breakage happens iff my code no longer works with an upgraded dependency; no more, no less.
> Again, you did make a clever observation of a potential build break; I'm just saying I don't think that Go library authors need to spend more time worrying about potential name collisions with multiple embedding, or that people should specifically build workflows and tools around it.
Here, I disagree. Tooling to work around breakages would be excellent. As you mentioned yourself, for most breakages you just have to make the compiler happy, and it's usually pretty trivial to figure out what's needed. At that point, there really should be a tool to do the job; after all, you should never send a human to do a computer's job. :)
Yep! Point is, we all are, so we all have to worry about how things break in practice, not only about the theoretical model of compatibility.
And it turns out we break each other's code in lots of ways, sometimes even by just changing behavior without touching the API. If I had a tool that detected certain build-time problems and worked around others, I could still end up upgrading to a new version of a library that breaks my product. So we end up with vendoring and such, where product maintainers sort it out, and we will probably continue to need humans in the loop as long as releases have bugs.
I do think there's plenty to do on the larger issue of compatibility even if not specifically focused on this particular build-time break. There are tools I'd love to see, e.g. to test my program with its deps updated and maybe even do something git-bisect-ish to find just where things went wrong. (Node has something along those lines named Greenkeeper.) Peter Bourgon's got a group working on Go package management, and there's been work done, both practical (e.g. the tools and practices at https://peter.bourgon.org/go-best-practices-2016/#dependency...) and theoretical (https://research.swtch.com/version-sat). Useful progress and interesting stuff.