• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • Kotlin is a really nice language with plenty of users and good tooling support. It gets rid of a lot of the boilerplate that older languages have, and it instills good practices early on (variables are immutable unless declared otherwise, types are non-nullable unless explicitly marked, etc.)
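
A minimal sketch of the defaults mentioned above (immutable `val` bindings and non-nullable types unless you opt out):

```kotlin
fun main() {
    val greeting = "hello"          // `val` = immutable binding; `var` opts into mutation
    // greeting = "hi"              // would be a compile error: val cannot be reassigned

    var name: String? = null        // nullable only with an explicit `?` on the type
    val length = name?.length ?: 0  // safe call + elvis operator instead of an NPE
    println("$greeting, length=$length")
}
```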

    But to get the most “bang for your buck” early on, you can’t beat JavaScript (with TypeScript to help you make sense of your codebase as it keeps changing and growing).

    You will probably want to develop stuff that has some user interface and you’ll want to show it to people, and there is no better platform for that than the web. And JS is by far the most supported language on the web.

    And the browser devtools are right there, an indispensable tool.







  • But why bother creating a new language that duplicates, in a weird way, all the features your language already has?

    If I want a list of UI items based on an array of some data, I can just do items.map(item => <Item key={item.id} item={item} />), using the normal map function that’s already part of the language.

    Or I can use a function, e.g. items.map(item => renderItem(item, otherData)) etc.

    JSX itself is a very thin layer that translates to normal function calls.
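
To make that concrete, here is roughly what the JSX above compiles down to (classic transform), using a hypothetical `createElement` stand-in so the sketch doesn't depend on React itself:

```typescript
// Minimal stand-in for what a JSX transform emits: each element
// becomes a plain function call producing a plain object.
type VNode = { type: string; props: Record<string, unknown> };

function createElement(type: string, props: Record<string, unknown>): VNode {
  return { type, props };
}

const items = [{ id: 1 }, { id: 2 }];

// <Item key={item.id} item={item} /> is just a function call:
const children = items.map(item => createElement("Item", { key: item.id, item }));

console.log(children.length); // 2
```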






  • realharo@lemm.ee to Programming@programming.dev · *Permanently Deleted* · 1 year ago

    On one hand, this is definitely a gap; on the other hand, you are very unlikely to run into it in practice.

    The whole “pass an array/object into some function that will mutate it for you” pattern is not very popular in JS; you are much more likely to encounter code that just gives you a new array as a return value and treats its arguments as read-only.

    If you validate your data at the boundaries where it enters your system (e.g. incoming JSON from HTTP responses), TypeScript is plenty good enough for almost all practical uses.
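
A minimal sketch of that boundary validation, using a hand-rolled type guard (a library like zod would serve the same purpose) on a hypothetical `User` shape:

```typescript
// Validate untyped JSON once, at the boundary; everything past the
// guard can rely on the static type.
interface User {
  id: number;
  name: string;
}

function isUser(value: unknown): value is User {
  return (
    typeof value === "object" && value !== null &&
    typeof (value as Record<string, unknown>).id === "number" &&
    typeof (value as Record<string, unknown>).name === "string"
  );
}

const raw: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
if (isUser(raw)) {
  console.log(raw.name); // safely typed as User here
}
```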


  • realharo@lemm.ee to Rust@programming.dev · The ???? operator · 1 year ago

    But that’s not the case here, seeing as they have

    if self.len() >= MAX_STACK_ALLOCATION {
        return with_nix_path_allocating(self, f);
    }
    

    in the code of with_nix_path. And I think they still could’ve made it return the value instead of calling the passed-in function, by using something like

    enum NixPathValue {
        Short(MaybeUninit<[u8; 1024]>, usize),
        Long(CString),
    }

    impl NixPathValue {
        fn as_c_str(&self) -> &CStr {
            // ...
        }
    }

    impl NixPath for [u8] {
        fn to_nix_path(&self) -> Result<NixPathValue> {
            // return Short(buf, self.len()) for short paths, and perform all checks here,
            // so that NixPathValue::as_c_str can then use CStr::from_bytes_with_nul_unchecked
        }
    }


    But I don’t know what performance implications that would have, and whether the difference would matter at all. Would there be an unnecessary copy? Would the compiler optimize it out? etc.

    Also, from a maintainability standpoint, the amount of context the library authors would need to track to manually ensure all the unsafe code is used correctly would be slightly larger.

    As a user of a library, I would still prefer all that over the nesting.
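
To make the sketch above concrete, here is a self-contained, compilable version of that value-returning design. It is simplified: a plain zeroed array instead of MaybeUninit, the checked CStr constructor instead of the unsafe one, and no error handling — not the library's actual API.

```rust
use std::ffi::{CStr, CString};

const MAX_STACK_ALLOCATION: usize = 1024;

enum NixPathValue {
    // Short paths live in a stack buffer; the usize is the path length
    // (the byte right after it is the terminating NUL).
    Short([u8; MAX_STACK_ALLOCATION], usize),
    // Long paths fall back to a heap-owned CString.
    Long(CString),
}

impl NixPathValue {
    fn as_c_str(&self) -> &CStr {
        match self {
            // The constructor guarantees a single trailing NUL, so the real
            // library could use from_bytes_with_nul_unchecked here; the
            // checked version keeps this sketch safe.
            NixPathValue::Short(buf, len) => {
                CStr::from_bytes_with_nul(&buf[..len + 1]).expect("NUL-terminated")
            }
            NixPathValue::Long(s) => s.as_c_str(),
        }
    }
}

fn to_nix_path(bytes: &[u8]) -> NixPathValue {
    if bytes.len() < MAX_STACK_ALLOCATION {
        let mut buf = [0u8; MAX_STACK_ALLOCATION];
        buf[..bytes.len()].copy_from_slice(bytes);
        NixPathValue::Short(buf, bytes.len())
    } else {
        NixPathValue::Long(CString::new(bytes).expect("no interior NUL"))
    }
}

fn main() {
    // Note: returning Short moves the whole 1024-byte buffer by value,
    // which is exactly the potential extra copy worried about above.
    let p = to_nix_path(b"/tmp/example");
    println!("{}", p.as_c_str().to_str().unwrap());
}
```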


  • realharo@lemm.ee to Rust@programming.dev · The ???? operator · 1 year ago

    I think the issue with this is that the code (https://docs.rs/nix/0.27.1/src/nix/lib.rs.html#297) allocates a fixed-size buffer on the stack in order to add a terminating zero to the end of the path copied into it. So it just gives you a reference into that buffer, which can’t outlive the function call.
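
The borrow constraint can be sketched like this (a simplified, stack-only toy, not the library's real code — the real with_nix_path also handles long paths and error cases):

```rust
use std::ffi::CStr;

// The CStr borrows a local buffer, so it can only be handed to a
// callback — the closure-passing shape described above.
fn with_c_path<T>(path: &[u8], f: impl FnOnce(&CStr) -> T) -> T {
    let mut buf = [0u8; 1024];
    buf[..path.len()].copy_from_slice(path);
    // `buf` dies when this function returns, so returning a &CStr
    // pointing into it would not compile; calling `f` is the way out.
    let c = CStr::from_bytes_with_nul(&buf[..path.len() + 1]).expect("NUL-terminated");
    f(c)
}

fn main() {
    let n = with_c_path(b"/etc/hosts", |c| c.to_bytes().len());
    println!("{n}"); // 10
}
```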

    They do also have a with_nix_path_allocating function (https://docs.rs/nix/0.27.1/src/nix/lib.rs.html#332) that just gives you a CString that owns its buffer on the heap, so there must be some reason why they went with this design. Maybe premature optimization? Maybe it actually makes a difference? 🤔

    They could have just returned the buffer via some wrapper that owns it and has the as_c_str function on it, but that would have resulted in a copy, so I’m not sure whether it would still achieve what they’re aiming for here. I wonder if they ran benchmarks on all this, or whether they’re just writing what they think will be fast.