This is a pretty good idea, my wife dual boots and I’ll suggest it to her as Windows keeps trashing the EFI partition.
This would work, but it assumes the primary use of the machine is Windows, and it significantly degrades your performance under Linux due to USB speeds. Even if you’re storing your data on the Windows HDD, NTFS drivers are dog slow compared to ext4 and other *nix filesystems.
Also, some BIOSes are a pain to get booting reliably off removable drives, so it really depends on what your machine is.
I’ve used Linux as a primary dev system for well over a decade now, and with the current state of Windows I’d really recommend just taking the leap: keep your Windows box if you need Windows software, and build a dedicated Linux workstation.
You’re missing one:
Aside from “lightweight apps in VM” this is the only solution I use now. (Unless you count Proton, but having Steam games Just Work barely feels like a “solution” as it requires zero effort on my part)
I don’t even trust Windows to dual boot off a separate disk without trying to break something anymore.
Some of the concepts in this book really stuck with me, but I had no idea what the title was! Thanks!
“Some days you’re the original, some days you’re the copy” or something like that
He just said “hairy ass”, well after hearing that I hope it’s “he” anyways for his own sake
Am farmer, can confirm. I also have my chequebook with me… Non-farmers, when was the last time you wrote a cheque, aside from rent? I feel like we’re the only ones still using them.
Even with external volumes, I don’t think there should be any mechanism where a container can escape a bind mount to affect the rest of the host fs? I use bind mounts all the time, far more than docker volumes.
99% of audience dozing off, 1% fascinated by the mystical art of antennas and radio waves. I know the science behind it, but I still don’t know how you guys came up with some of those designs.
As I said,
C/++ with renewed appreciation
No such thing as eval in non-interpreted languages. Unless you’re crazy enough to invoke the compiler and exec() the result.
I used eval too in my Perl days which is why I specifically called it out. IMO any time you see eval used there should be another, more proper way to do it.
These microplastics are digestible by your immune system, though, which makes them ultimately harmless. PLA is used for drug delivery for this reason.
Being concerned about incomplete PLA degradation is like being concerned about a piece of wood breaking down into micro-woods. Yet even if you get a dangerous shard of micro-wood embedded in your skin, your body can deal with this cellulose polymer just fine.
Ultimately it will break down completely someday and in the meantime, nothing will be harmed.
I love the term “write-only code”, it’s perfect. I used to love Perl as it felt like it flowed straight from my brain into the keyboard. What a free and magical language.
So it turned out I had ADHD. Took meds, went back to C/++ with renewed appreciation, haven’t touched Perl since as it horrifies me to look at it. What a nightmare of dangling references and questionable typing. Any language that allows you to cast a string to a function and call it really needs to sit down and think about what it’s doing.
True survivalist/libertarian types have always loved solar power.
I don’t know how solar lost its space-age coolness, though, aside from active lobbying from the fossil fuel industry trying to kill it. For a while solar was undoubtedly the power source of the future, the same thing that was on our space probes and satellites.
I have old oil-crisis era books and magazines on my shelf which absolutely loved solar power and billed it as the cheap energy solution for the common man. Somewhere we went wrong, and I think it was Reagan (in many ways…)
Diverting attention from other nerdy/niche groups who don’t seem weird at all in comparison?
If you don’t want ~~memory-safe~~ buffer overruns, don’t write C/C++.
Fixed further?
It’s perfectly possible to write C++ code that won’t fall prey to buffer overruns; C is a lot harder. However, yes, it’s far from memory-safe: you can still do stupid things with pointers and freed memory if you want to.
I’ll admit as I grew up with C I still have a love for some of its oh so simple features like structs. For embedded work, give me a packed struct over complex serialization libraries any day.
I tend to write a hybrid of the two languages for my own projects, and I’ll be honest I’ve forgotten where exactly the line lies between them.
Even just for reporting issues, anyone who is capable of identifying a bug is likely to have a GitHub account. Not so for GitLab or others.
Then you’ve got seamless integration with VS Code as a bonus; at that point it’s more a question of why you would *not* use GitHub, unless you have a specific problem with them.
A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models, you have to review basically all output, as they can confidently introduce small errors. Often the code will compile and run, but it has small mistakes that can cause output to drift, or the aforementioned long-run overflow-type errors.
Those are the errors that junior or lazy coders will never notice and walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries.
I will only use AI to write comments and documentation blocks, and to get jumping-off points for algorithms I don’t keep in my head (“write a function to sort this array”). It’s better than Stack Exchange for that, IMO.
I tried using AI tools to do some cleanup and refactoring of some legacy embedded C code and was curious if it could do any optimization or knew any clever algorithms.
It’s pretty good at figuring out the function of the code and adding comments, it did some decent refactoring of some sections to make them more readable.
It has no clue about how to work in a resource constrained environment or about the main concepts that separate embedded from everything else. Namely that it has to be able to run “forever”, operate in realtime on a constant flow of sensor data, and that nobody else is taking care of your memory management.
It even explained to me that we could do input filtering using big averaging arrays on a device with only 1 kB of RAM, or use a long long as a never-reset accumulator without worrying about what will happen, because “it will be years before it overflows”.
AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems the whole world is going to get a lot more glitchy.
Neat. That’s something I never even thought of. When typing in Arabic, does the cursor proceed from right to left, then?
Is this somehow handled with locales, are custom operating systems required, or is it really only handled by specific editors like word processors?
I’m trying to imagine how this would work at, say, a console bash prompt.
I learned so much at school, hacking crappy computers because I was bored. Boot disks in my backpack, hex editing the typing lesson saves, packing emulators and ROMs onto one floppy at a time and merging them back together (I even wrote a BASIC program for this because I didn’t know that tools existed to compress and chunk large files). And just exploratory hacking for fun, writing scripts and tools and stuff just to see if I could.
Chromebooks are the opposite of that. We bought our daughter a Chromebook, and on realizing it was only a tablet with a keyboard, it went back to the store. She has my old Linux desktop now and knows a lot more than her friends.