Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”
In small print it says the data was collected over 2022 and 2023.
Right, it’s essentially the same argument as strong vs. weak typing. The weak typing proponents say JavaScript is best because you can just write anything and you don’t need to worry about all those pesky types getting in your way. The strong typing proponents (of which, if it’s not obvious, I am one) point out that you can write incorrect code quickly in just about any language, but writing correct code is much harder, and the cost of correcting code increases the later the mistake is found. Errors that can’t even be written are better than errors found at compile time, which are better than errors reliably caught at runtime, and all of those are infinitely better than errors that only randomly appear under very specific circumstances.
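To make that hierarchy concrete, here is a minimal Rust sketch (a made-up example, not from any real codebase): the first version lets an invalid combination of arguments be expressed and has to catch it at runtime, while in the second the invalid combination can’t even be written.

// Runtime-checked: nothing stops a caller from passing logged_in = true with no name.
fn greet_runtime(logged_in: bool, user_name: Option<String>) -> String {
    if logged_in {
        // Panics at runtime if the caller got the combination wrong.
        format!("Hello, {}", user_name.expect("logged-in users must have a name"))
    } else {
        "Hello, guest".to_string()
    }
}

// Type-checked: the invalid combination (logged in but no name) is unrepresentable.
enum Session {
    Guest,
    LoggedIn { user_name: String },
}

fn greet_typed(session: &Session) -> String {
    match session {
        Session::Guest => "Hello, guest".to_string(),
        Session::LoggedIn { user_name } => format!("Hello, {}", user_name),
    }
}

fn main() {
    println!("{}", greet_runtime(true, Some("Ada".to_string())));
    println!("{}", greet_typed(&Session::LoggedIn { user_name: "Ada".to_string() }));
}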
That is why many people switched to TypeScript for their websites instead of JavaScript: even though you have to spend more time putting type annotations on everything, and even though at runtime TypeScript is literally just JavaScript, the errors it lets you find at compile time instead of runtime make the effort of writing those types worth it. The same thing applies to Rust vs. Go. Yes, it requires more thinking up front when you’re writing Rust code, and yes, it might take you longer to write that code, but it’s also going to be correct code you can be confident in, without a bunch of ticking time bombs hidden in it that you don’t even know about.
An extra 30 minutes spent having to think about a dozen lines of code is infinitely preferable to spending 3 hours poring over stack traces and single-stepping through debuggers to find that one subtle mistake you made.
I totally agree, though I think it’s worth adding:
The advantage of static types is not just finding bugs (though they do that quite well). They also massively help with productivity, because a) the types act as documentation, and b) you can use code intelligence tools like renaming variables, go-to-definition, find-references, etc. (assuming you use a good editor/IDE).
In general stronger types are better but I do think there is a point at which the effort of getting the types right is too high to be worth the benefit. I would say Rust hasn’t reached that point, but if you look at formal verification languages like Dafny, it’s pretty clear that you wouldn’t want to use that except in extreme circumstances. Similarly I think the ability to use an “any” or “dynamic” escape hatch is quite useful, even if it should be used very sparingly.
You are right. But I think similar secondary benefits also come from using the borrow checker. Rust developers, by necessity, try to avoid using circular references and prefer immutability where they can. Both of these are advantages because they tend to make for systems that are easier to understand and are easier to maintain.
Yeah I agree. The borrow checker definitely pushes you to write less buggy code.
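For example, here’s a minimal sketch (just the standard borrow rules, nothing project-specific) of the kind of aliased mutation the borrow checker flatly refuses to compile:

fn main() {
    let mut scores = vec![10, 20, 30];

    // Hold a shared (immutable) reference into the vector...
    let first = &scores[0];

    // ...then try to mutate the vector while that reference is still alive.
    // Uncommenting the next line fails with error[E0502]:
    // "cannot borrow `scores` as mutable because it is also borrowed as immutable".
    // scores.push(40);

    println!("first score: {}", first);
}

That restriction is exactly what nudges you toward fewer shared mutable references and more immutable data.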
Absolutely! Types are as much about providing the programmer with information as they are about informing the compiler. A well-typed, well-designed API conveys so much useful information. It’s why it’s mildly infuriating when I see functions that look like something from C, where you’ll see something like:
pub fn draw_circle(x: i8, y: i8, red: u8, green: u8, blue: u8, r: u8) -> bool {
rather than a better strongly typed version like:
type Point = Vec2<i8>;
type Color = Vec3<u8>;
type Radius = NonZero<u8>;

pub fn draw_circle(point: Point, color: Color, r: Radius) -> Result<()> {
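As a rough, self-contained sketch of what that buys you at the call site (the Point/Color/Radius definitions below are hypothetical stand-ins so the snippet compiles on its own, not any particular crate’s Vec2/Vec3):

use std::num::NonZeroU8;

// Hypothetical stand-ins for the aliases above.
struct Point { x: i8, y: i8 }
struct Color { red: u8, green: u8, blue: u8 }
type Radius = NonZeroU8;

fn draw_circle(point: Point, color: Color, radius: Radius) -> Result<(), String> {
    // Actual drawing elided; the signature alone documents what each argument means.
    let _ = (point.x, point.y, color.red, color.green, color.blue, radius.get());
    Ok(())
}

fn main() -> Result<(), String> {
    // The call site reads like documentation, and a bare 0 can no longer be passed as the radius.
    let radius = Radius::new(5).ok_or("radius must be non-zero")?;
    draw_circle(Point { x: 10, y: -4 }, Color { red: 255, green: 0, blue: 0 }, radius)
}

Swapping the coordinate and color arguments, or passing a raw integer as the radius, now fails to compile instead of silently drawing garbage.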
I disagree with this, I don’t think those are ever necessary assuming a powerful enough type system. Function arguments should always have a defined type, even if it’s using dynamic dispatch. If you just want to not have to specify the type on a local, let bindings where you don’t explicitly define the type are fine, but even in that case it still has a type, you’re just letting the compiler infer it for you (and if it can’t, it will error).
You can go to definition / find references / rename for dynamically typed languages too.
E.g. https://github.com/palantir/python-language-server
Without static type annotations you can only make best effort guesses that are sometimes right. Better than nothing but not remotely the same as actual static types. The LSP you linked works best when you use static type annotations.
Also I would really recommend Pylance over that if you can - it’s much better but is also closed source unfortunately.
Why would it just be best effort? To find references for a specific thing, it would still parse an AST, find the current scope, see it’s imported from some module, find other imports of the module, etc.
from random import random

if random() > 0.5:
    x = 2
else:
    x = "hello"
Where is the definition of x? What is the type of x? If you can’t identify it, neither can the LSP.
This kind of thing actually happens when implementing interfaces, inheritance, etc. Thus, LSPs in dynamic languages are best effort both in theory and in practice.
Types are not necessary at all.
Saying “x is defined somewhere in the entire program” isn’t satisfactory to many users. Also, you didn’t tell me what type x has. Can I do x + 5?
Tbf this example can be deduced as string | int just fine. The real problem is when you start using runtime reflection, like getattr(obj, "x").
def get_price(x):
    return x.prize

Ok imagine you are an LSP. What type is x? Is prize a typo? What auto-complete options would you return for “x.”?
I didn’t say types. I said find references / go to definition / rename.
How are you going to find references to prize, go to its definition, or rename it without knowing what type x is? It’s impossible without static types.
It breaks down when you do runtime reflection, like getattr(obj, "x").
Preach 🙏