• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • For me, I view Apollo as the high school quarterback winning the homecoming game.

    In that context, it’s a great achievement. A lot of time, effort, and luck all came together at just the right moment to create an entertaining spectacle. The school is all happy and celebrating, and students will remember that moment for years to come. But in the grand scheme of things, it’s not that big of an achievement, since everyone there will move on to bigger and greater things, except they won’t have a student body cheering them on.

    I think saying the Apollo program is one of the greatest achievements of mankind falsely puts it on a pedestal and forever sets up all other achievements as lesser. It makes us all feel like anything that isn’t chasing that glory isn’t worth it. It’s an achievement for sure, but not the biggest. If I had to give the greatest achievement in space technology to anything, I’d give it to either GPS or GOES.


  • Short answer: it’s not that we don’t have the technology, it’s that we don’t have a reason to. With very few exceptions, if you can do it on the moon, you can do it on Earth or in Earth orbit.

    Long answer: in the space industry/field the moon is incredibly boring, relatively expensive to get to, and adds an extra step of logistics to an already complicated mission profile. Most space-related technology advancement has gone into doing things in orbit: there is more to do there than on the moon, it’s logistically simpler, and the cost is orders of magnitude less. Stuff is still advancing there; think Hubble vs James Webb, GPS 1 vs GPS 3, the entire GOES system. In terms of technical challenges, they’re far more interesting than anything on the moon, but it’s not as flashy/headline-grabbing so it’s not talked about much.

    The US going to the moon in the ’60s and ’70s was a rare combination of a win for scientists, politicians, and the people. The political incentive went away: as the USSR’s space program collapsed, so too did the political pressure to continue putting men on the moon and “prove 'Murica is better than those damn commies”.

    In modern times the political incentive is returning with China’s continued efforts to do more in space, so we get the Artemis program, but the incentives aren’t that strong, which is why the program has moved so slowly.


  • MajorasMaskForever to Selfhosted@lemmy.world · Why self host a password manager? (English)

    To me 16 is long haha.

    I usually end up running with 16 characters, since a lot of services reject anything longer than 20 and, as a programmer, I just like it when things are a power of two. Back in the Dark Times of remembering passwords my longest was 13 characters, so when I started using a password manager, setting them that long felt wild to me.

    I do have my bank accounts under a 64-character password, purely because monkey brain like seeing big security rating in KeePass. Entropy go brrrrrrrrrrrr (rough math below).
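    For a sense of scale, a minimal back-of-envelope sketch, assuming every character is picked uniformly at random from roughly the 95 printable ASCII characters (which is what a generator does, and human-chosen passwords do not), using entropy = length × log2(alphabet size):

    ```cpp
    // Rough password entropy estimate: bits = length * log2(alphabet size).
    // Only valid for generator-made passwords where every character is chosen
    // uniformly at random from the alphabet.
    #include <cmath>
    #include <cstdio>

    double entropy_bits(int length, int alphabet_size) {
        return length * std::log2(static_cast<double>(alphabet_size));
    }

    int main() {
        // ~95 printable ASCII characters is a typical generator alphabet.
        std::printf("16 chars: ~%.0f bits\n", entropy_bits(16, 95)); // ~105 bits
        std::printf("64 chars: ~%.0f bits\n", entropy_bits(64, 95)); // ~420 bits
        return 0;
    }
    ```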


  • I’ve used cloud-based password managers for work and “self host” my personal stuff. I barely consider it self-hosting since I use KeePass, and on every machine it’s configured to keep a local cached copy of the database but primarily pull from the database file on my in-home NAS.

    Two issues I’ve had:

    Logging into an account on a device that isn’t currently on my home network is brutal. I often resort to simply viewing the needed password and painstakingly typing it in (and I run with loooooong passwords).

    If I add or change a password on a desktop and don’t sync my phone before I leave, I get locked out of accounts. In two years rocking this setup it’s happened three times: twice I just said meh, I don’t really need to do this now, and the third time I went through account recovery and set a new password from my phone.

    Minor complaint:

    Sometimes Keepass2Android gets stuck trying to open the remote database and I have to let it sit and time out (5 minutes!!!), which gets really annoying but happens so infrequently that I call it just a minor complaint.

    All in all, I find the inconvenience of the personal setup so low that to me even a $10 annual subscription is not worth it.



  • A combination of anti-large-company sentiment plus people feeling entitled to get things for free, if I had to guess. It also usually feels wrong when a corporation threatens a lawsuit against a single person, since the US court system heavily favors the side with more money, and it’s probably safe to say that Nintendo has more resources than the lead dev.

    Modern Vintage Gamer on YouTube had an interesting take: stifling emulator development now will hurt the industry in the long run, because Switch exclusives will become increasingly difficult to play once support ends (an argument I myself don’t find all that compelling).

    Nerrel on YouTube has a well put together and researched video on emulation: at least in the US, it’s been tested in court several times that emulators themselves are legal, but obtaining the game code for the emulators to run is almost always not, since you usually have to make a copy, and that violates the publisher’s right to copy.


  • Ironically enough, Aurora city water consistently wins awards for its quality lol.

    I think the legitimate reason is that Aurora is a physically massive city, has lower housing costs than the rest of the metro area, and Denver has a habit of forcing its homeless population out and into Aurora. The police department is also an absolute good ole boys club who are all terrified of city residents, to the point where they drive unmarked/undercover vehicles by default (at least it seems that way; I see so few marked police cars, but whenever there’s a collection of cop cars with lights going, the majority are undercover).

    Sauce: current Aurora, CO resident. It’s not all bad.


  • Embedded systems run into this a lot, especially on low-level communication buses. It’s pretty common to have a bus architecture where there is just one device that is supposed to be in control of both the communication happening on the bus and what the other devices are actually doing. SPI and I2C are both examples of this, but both of those buses also have architectures where there isn’t one single controller, or where the devices have some other way to arbitrate who is talking on the bus. It’s functionally useful to have a term to differentiate between the two.

    I’ve seen Master/Servant used before, which in my experience just trips people up and doesn’t really address the cultural reason for not using the terms.

    Personally I’m a fan of MIL-STD-1553 terminology, Bus Controller and Remote Terminal, but the letters M and S are heavily baked into so much literature and so many designs at this point (e.g. MISO and MOSI) that entirely swapping them out would be costly and so few people will do it, so it sticks around (rough sketch of what the renamed interface can look like below).
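    A minimal sketch of that naming swap, assuming a made-up pin/role struct rather than any real HAL, with the legacy MISO/MOSI names kept in comments so existing schematics still cross-reference:

    ```cpp
    // Illustrative only, not from any real HAL: SPI roles and pins named with
    // MIL-STD-1553 style terms, legacy MISO/MOSI names kept in comments so the
    // mountain of existing documentation still maps cleanly.
    #include <cstdint>

    enum class SpiRole : std::uint8_t {
        BusController,   // drives the clock and decides who talks on the bus
        RemoteTerminal,  // responds only when selected
    };

    struct SpiPins {
        std::uint8_t sclk;  // serial clock, driven by the bus controller
        std::uint8_t cs_n;  // chip select, active low
        std::uint8_t cipo;  // Controller In, Peripheral Out (legacy: MISO)
        std::uint8_t copi;  // Controller Out, Peripheral In (legacy: MOSI)
    };

    // The asymmetry stays explicit in the API: only the bus controller
    // initiates a transfer; remote terminals just shift data when clocked.
    std::uint8_t spi_transfer(SpiRole initiator, const SpiPins& pins,
                              std::uint8_t out_byte);
    ```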



  • For graphics, the problem to be solved is that the compiled N64 code expects that if it puts value X at memory address Y, it will draw a particular pixel in a particular way.

    Emulators solve this problem by having a virtual CPU execute the game code (kinda difficult), then having emulator code read the virtual memory space the game code is interacting with (easy), interpret those values (stupid crazy hard), and replicate the graphical effects using custom code/a modern graphics API (kinda difficult).

    This program decompiles the N64 code (easy), searches for known function calls that interact with the N64 GPU (easy), swaps them with known valid modern graphics API calls (easy), then compiles for the local machine (easy). Knowing what function signatures to look for and what to replace them with in the general case is basically downright impossible, but because a lot of N64 games used common code, if you go through the laborious process for one game, you get a bunch extra for free or with way less effort (toy sketch of the call-swapping idea below).

    As one of my favorite engineering phrases goes: the devil is in the details.
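    A toy illustration of that swap, with every name (n64_gdp_fill_rect, game_draw_hud) made up for the sketch; the real tooling operates on decompiled game functions, not hand-written C++ like this:

    ```cpp
    // Toy sketch of "swap known N64 graphics calls for modern ones".
    // All names here are invented for illustration.
    #include <cstdio>

    // A known N64-side graphics routine, identified by its signature in the
    // decompiled code. In the recompiled build its body calls a modern API
    // instead of poking values into emulated RDP registers.
    void n64_gdp_fill_rect(unsigned x0, unsigned y0,
                           unsigned x1, unsigned y1, unsigned rgba) {
        std::printf("fill rect (%u,%u)-(%u,%u) color=0x%08X via modern API\n",
                    x0, y0, x1, y1, rgba);
    }

    // Recompiled game code keeps its original call sites untouched...
    void game_draw_hud() {
        n64_gdp_fill_rect(0, 0, 320, 24, 0x000000FFu);  // top HUD bar
    }

    int main() {
        game_draw_hud();  // ...so reimplementing one shared function fixes every caller.
        return 0;
    }
    ```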


  • MajorasMaskForever to Programming@programming.dev · ... (English)

    Ada

    It has a lot of really nice features for creating data types and amazing compile-time static analysis.

    But all the tooling around it is absolute crap, making the language unbearable to use. If it had better tooling, I could see it having taken a decent chunk of development away from C and C++.


  • The ham radio thing makes me so sad, it really does seem like a dying hobby. But when I took my test, the club sponsoring it had guys there who immediately berated me for using a practice test guide and getting a cheap piece of crap radio. Like yeah, I know it’s a terrible radio, but it was $70 with the practice guide and I’m a poor af college student. That little radio lasted me years, and I only bought a new one because its battery died and I couldn’t find a replacement.


  • Yes and no.

    Chess bots (like Stockfish) are trained on game samples, with the goal of predicting which search path to keep looking at and which moves will result in a win. You get game samples by playing the game, so it made sense to have Stockfish play itself, since the input was always still generated by the rules of chess.

    If a classifier or predictive model creates its own data without tying it to the rules and methods of reality, it’s going to become increasingly divorced from reality. If I had to guess, that’s what the guy in the article is referencing when talking about “sanitizing” the data. Some problems, like chess, are really easy (toy example below). Mimicking human speech? Probably not.
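    A toy self-play data generator for Nim (not Stockfish’s actual pipeline, just an assumed stand-in for a rules-bound game), showing why data generated by playing the game can’t drift away from what’s legal:

    ```cpp
    // Toy self-play data generation for Nim: every (state, move, winner) sample
    // comes from moves sampled inside the game's rules, so the self-generated
    // data stays tied to what is actually legal.
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Sample { int stones_before; int taken; int winner; };

    int main() {
        std::mt19937 rng(42);
        std::vector<Sample> dataset;

        for (int game = 0; game < 3; ++game) {
            int stones = 15;
            int player = 0;
            std::vector<Sample> moves;
            while (stones > 0) {
                // The rules define the legal moves: take 1..3 stones, never more
                // than remain. Sampling only from that set keeps the data valid.
                std::uniform_int_distribution<int> pick(1, stones < 3 ? stones : 3);
                int taken = pick(rng);
                moves.push_back({stones, taken, -1});
                stones -= taken;
                if (stones == 0)
                    for (auto& m : moves) m.winner = player;  // last to move wins
                player = 1 - player;
            }
            dataset.insert(dataset.end(), moves.begin(), moves.end());
        }

        for (const auto& s : dataset)
            std::printf("state=%2d take=%d winner=player%d\n",
                        s.stones_before, s.taken, s.winner);
        return 0;
    }
    ```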


  • MajorasMaskForever to Programming@programming.dev · C++ creator rebuts White House warning (English)

    As someone who is in the aerospace industry and has dealt with safety-critical code under NASA oversight, it’s a little disingenuous to pin NASA’s coding standards entirely on attempting to make things memory safe. It’s part of it, yeah, but a very small part. There are a ton of other things that NASA is trying to protect against.

    Plus, Rust doesn’t solve the underlying problem that NASA is looking to prevent in banning the C++ standard library. Part of it is DO-178 compliance (or lack thereof); the other part is that dynamic memory has the potential to cause all sorts of problems on resource-constrained embedded systems. Statically analyzing dynamic memory usage is virtually impossible and testing for it gets cost prohibitive real quick, so it’s just easier to blanket-ban the STL (sketch of the usual alternative below).

    Also, writing memory-safe code honestly isn’t that hard. It just requires a different approach to problem solving that, like any other design pattern, is easy once you learn it and get used to it.
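    A minimal sketch of the kind of fixed-capacity pattern that typically replaces std::vector in that world; this is an assumed illustration, not NASA flight code or a DO-178 artifact:

    ```cpp
    // Fixed-capacity buffer instead of std::vector: worst-case memory use is
    // known at compile time and there is no heap to exhaust or fragment.
    #include <cstddef>
    #include <cstdint>

    template <typename T, std::size_t Capacity>
    class StaticVector {
    public:
        // Fails explicitly instead of allocating, so the failure mode is
        // analyzable up front rather than a runtime heap surprise.
        bool push_back(const T& value) {
            if (size_ >= Capacity) return false;
            data_[size_++] = value;
            return true;
        }
        std::size_t size() const { return size_; }
        const T& operator[](std::size_t i) const { return data_[i]; }

    private:
        T data_[Capacity] = {};
        std::size_t size_ = 0;
    };

    int main() {
        StaticVector<std::uint16_t, 32> telemetry;  // worst case is 32 samples, period
        return telemetry.push_back(1234) ? 0 : 1;   // no heap, no surprises
    }
    ```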



  • So many people forget that while they understand how to use a Linux terminal and how Linux works at a high level, not everyone does. Plus, learning all of that takes time, effort, and tenacity, which not everyone is willing to put in. Linus’s whole conclusion was that as long as that learning curve exists and it’s that easy to shoot yourself in the foot, the Linux desktop just isn’t viable for a lot of people.

    But Linus has had a lot of public fuck-ups, therefore everything he says must be inherently wrong.


  • Whenever I replay OOT I never have a problem with Navi. She rarely hard-interrupts, usually just a short tone and a flashing C button that goes away after a few seconds. The voice lines only trigger if you press the button to call her, in most cases the hints she gives are genuinely helpful, and she stays out of your way for the vast majority of the game.

    Fi from Skyward Sword, though? Far worse, because she does interrupt gameplay, often repeats what the last dialogue box just fucking told you, and takes several dialogue boxes to tell you what Navi would have told you in one. I’m glad they significantly overhauled her interactions in the HD release, but I’m still going to be hesitant to play that game again.


  • I think part of the “what do I do with this” factor for the iPad was that Apple (and other companies still to this day) were so hell-bent on making everything smaller and more compact that releasing a larger product was marketing whiplash. Not to mention that smartphones were being pitched as this “do everything” device, so why would you need anything else?

    After you get past that marketing sugarcoating, it becomes pretty obvious what you’d use an iPad for: internet and media consumption at a larger scale than your phone, easier on your eyes, while retaining at least some of the lightweight, smaller form factor that separates it from a regular laptop. Sure, you didn’t have the stick-it-in-your-pocket advantage of a phone or the full keyboard and computational power of a laptop, but there was this in-between where, for a modest fee, you could have the conveniences if you could live with/ignore the sacrifices.


  • I don’t think the MacBook Air’s launch is a good comparison.

    Sure, there was an early adopter tax on being one of the first “thin and light” laptops, but people already knew what you could use a MacBook for, there was already a large value proposition in having one, and the extra cost was entirely about being more portable than its full-size counterparts. Everything you could do on a Mac, just way easier to take on the go.

    I’ve read a few reviews of it and watched MKBHD’s initial review, and outside of a few demo apps they point to the Vision Pro having no real point to it. Which, if true, means it falls in line with existing VR headsets that are a fraction of its cost, and in a niche market, being three times the cost of your competitors is not a good position to be in.