• 0 Posts
Joined 2 years ago
Cake day: Jan. 21, 2021


I’m curious how this works from a Windows host. Does it transfer the Windows version and run it under Wine (even if there is a Linux build available)? Or does it transfer the shared assets and download the difference?

GitLab CI is pretty acceptable if you are using GitLab (and even ok if you aren’t). I wouldn’t say it is fantastic but it does the job with little pain.

I’ve been using Wayland every day for years with no issue (GNOME 3 on AMD graphics).

And the best part is that my OS isn’t actively hostile towards me (like Windows) and doesn’t try to control what I do with my computer (like macOS).

That seems like just a few empty repos? I don’t see any info there.

as they have a crawling bot

That is cool. Do you have any more info about that? I can’t see it mentioned on their website.

So you own your own blog and content. You can host your blog on a WordPress site, GitHub Pages, Ghost, or wherever you want.

Sounds like it just pulls feeds from anywhere. Nothing specific about GitHub at all.

This is like saying that Google is a search engine for GitHub blogs 🤷 I mean it is, but it is also a much more general tool.

What does diff.blog have to do with GitHub?

Honestly I think a fresh coat of paint is what K9 needs most. The recent swipe gestures, both for navigating between messages and in the message list, have been fantastic. But really working through the UX one component at a time will be a dramatic improvement to K9.

For example, the folder classes UI makes simple things too complicated and more complicated things impossible. The compose window is OK but could use a cleanup. The search UX is pretty awkward (and buggy). I’m glad to see the message window improving as well. The fact that there is currently no way to see both the name and address of the sender is very annoying; I need to pop open the “Show Headers” option way too often. I’d also really appreciate more powerful options for remote content in messages. The current On/Contacts Only/Off choice is too simple for my taste.

I think this mockup shows understanding of the current design and of which features are valuable and missing. Note that the mockup also has very long subjects and the like, so this is the worst-case space usage. I’m sure it will also be refined a bit more before being shipped.

New things are always scary and carry some risk, but I’m personally quite optimistic.

I don’t know if I see that as a technicality. I see that as an important aspect of how abolishing copyright would work. I’m curious how this would be managed: is there a new law requiring all non-personal information to be made public and freely available?

To me, abolishing copyright and making all information public are very different things, although they obviously have some similarities.

Copyleft also attempts to prevent software from being kept secret. Abolishing copyright would just turn proprietary software into trade secrets, while letting proprietary developers use GPL or AGPL code freely.

FOSS is a copyright hack with the ultimate goal to abolish copyright

I don’t think this is a universal opinion. Otherwise copyleft and attribution licenses wouldn’t be used. It is clear that some people see value in having some control over their software.

Note that it isn’t the algorithm that is copyrighted. Algorithms are not copyrightable IIUC. It is the way the code is written that is “art” and copyrightable. If this code were actually re-written using the same algorithm it would be fine. Much like you can own the text of a recipe but not the actual ingredients and steps themselves.

Of course you can still disagree. But I think that software is a creative endeavor and I think it is beneficial to provide some control to the author.

I do agree that software patents are generally harmful. There would maybe be some value in encouraging the development and sharing of algorithms or techniques, but I think the time frame would need to be much shorter (5 years max, maybe?). In practice we have seen that most uses of software patents are not valuable to society, and many software innovations are published in research journals for free anyways, so the best option is probably just to scrap the idea.

There are examples of it outputting entire complex algorithms that are definitely copyrightable and reasonable to be copyrighted. A recent example is https://twitter.com/docsparse/status/1581461734665367554.

I think copyright can be absurd, and I think it needs to be cut back in a lot of ways. But I think some amount of copyright makes sense and GitHub Copilot sometimes violates what I see as morally correct.

Personally I don’t have any problem with it being trained on copyrighted code. I also think that much of the code produced by GitHub Copilot is “original” and free from copyright. However there are many examples of cases where it spits out verbatim or near-identical copies of copyrighted code. It is clear to me that the code in these cases is still owned by the original owner.

It is just like human learning. I can read and learn from copyrighted code and write my own code with that newfound knowledge. However, memorizing code and re-writing it from memory doesn’t magically make it mine.

I don’t know about illegal but they should be forced to prominently advertise their security update lifetime. Sort of like energy labels are put onto household appliances or nutrition labels are put onto food.

Yes, you need to download all transitive dependencies.

But this isn’t dependency hell, it is just tedious. Dependency hell is when your dependency tree requires two (or more) versions of a single package, so that not all of the dependencies can be satisfied.
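As a toy sketch (all package names and versions here are made up): each dependent accepts some versions of a shared package, and installation is only possible if a single version suits them all.

```python
def satisfiable(requirements):
    """requirements maps each dependent to the set of versions of a
    shared package it accepts; installable only if some version
    satisfies everyone (empty input counts as unsatisfiable here)."""
    common = None
    for accepted in requirements.values():
        common = accepted if common is None else common & accepted
    return bool(common)

# app wants libfoo 1.x, plugin wants libfoo 2.x: no overlap = dependency hell.
hell = {"app": {"1.0", "1.1"}, "plugin": {"2.0"}}
# Here version 1.1 satisfies both, so the tree resolves fine.
ok = {"app": {"1.0", "1.1"}, "plugin": {"1.1", "2.0"}}
print(satisfiable(hell), satisfiable(ok))  # False True
```

Real resolvers handle version ranges and per-edge constraints, but the core conflict is the same intersection test.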

I don’t remember that working but I haven’t used Debian in years so it could be.

apt is the tool that downloads packages from online repositories. So if you don’t have internet access, apt won’t be very useful.

The command for installing package files on Debian is dpkg. So if you download a Debian package (usually named *.deb) you can install it with dpkg -i $pkg, as long as the dependencies are installed. Of course you can install the dependencies this way too, so just make sure you bring the package and all of the packages it depends on to the target machine.
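The “bring everything it depends on” step is just a transitive closure over the dependency graph. A minimal sketch, using a hypothetical hand-written dependency map rather than real Debian metadata:

```python
# Hypothetical Debian-style dependency metadata (not real package data).
DEPENDS = {
    "curl": ["libcurl4", "ca-certificates"],
    "libcurl4": ["libssl3"],
    "ca-certificates": [],
    "libssl3": [],
}

def closure(pkg):
    """Return every package to copy to the offline machine: the package
    itself plus all transitive dependencies."""
    needed, todo = set(), [pkg]
    while todo:
        p = todo.pop()
        if p not in needed:
            needed.add(p)
            todo.extend(DEPENDS.get(p, []))
    return needed

print(sorted(closure("curl")))
# ['ca-certificates', 'curl', 'libcurl4', 'libssl3']
```

With that list in hand, you would download each .deb on a connected machine and run dpkg -i on all of them on the target.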

I’m surprised. I thought it was much higher. Although this is up 38% (from last year I assume?) which is sad and not surprising.

2.3 MiB is an incredible amount of data to download for the average webpage, and much of it a waste.

Small nit on the page: the chart says KB, but the underlying expression reveals that it is actually KiB, as most people would expect.

What happened to bookmarklets and user scripts? I must have missed the boat because I’m still using them.

I keep all of my user scripts at https://userscripts.kevincox.ca/ and use a few made by others.

I only have one bookmarklet, it generates email addresses for me and injects them into a form. But I use it frequently.

I do agree that most people have moved to browser extensions. It is a shame that browsers didn’t just integrate and improve user scripts. You could imagine WebExtensions being just extra APIs instead of a big browser-integrated thing with a mandatory app store. I think this is a common problem we keep running into: centralized stores rather than decentralized installation methods.

I use an RSS-to-Email service to send updates to me. I then filter them into folders such as Not Important and Videos for me to read when I have some downtime. (And a few feeds go to my Inbox for fast action).