Alterslash

the unofficial Slashdot digest
 

Contents

  1. China Launches World’s Largest Electric Container Ship
  2. Satellite Operator SES Acquiring Intelsat In $3.1 Billion Deal
  3. America’s Wind Power Production Drops For the First Time In 25 Years
  4. Is Self Hosting Going Mainstream?
  5. 13.4 Million Kaiser Insurance Members Affected by Data Leak to Online Advertisers
  6. Google Removes RISC-V Support From Android Common Kernel, Denies Abandoning Its Efforts
  7. Dave & Buster’s To Allow Customers To Bet On Arcade Games
  8. Systemd Announces ‘run0’ Sudo Alternative
  9. Binance Founder Changpeng Zhao Sentenced To 4 Months In Prison
  10. Bruce Perens Emits Draft Post-Open Zero Cost License
  11. Change Healthcare Hackers Broke In Using Stolen Credentials, No MFA
  12. Extreme Heat Continues To Scorch Large Parts of Asia
  13. Supreme Court Declines To Block Texas Porn Restriction
  14. How an Empty S3 Bucket Can Make Your AWS Bill Explode
  15. Biden Administration Moves To Speed Up Permits for Clean Energy

Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.

China Launches World’s Largest Electric Container Ship

Posted by BeauHD
AmiMoJo shares a report from Tech Times:
China has reached a major landmark in green transportation with the launch of the world’s largest fully electric container ship. Developed and manufactured by China Ocean Shipping Group (Cosco), the vessel is now operating a regular service route between Shanghai and Nanjing, aiming to reduce emissions significantly along its journey. The Greenwater 01, an all-electric container ship, is positioning itself to be a shipping industry pioneer. Equipped with a main battery exceeding 50,000 kilowatt-hours, the vessel can accommodate additional battery boxes for longer voyages. These battery boxes, each containing 1,600 kilowatt-hours of electricity and similar in size to standard 20-foot containers, provide flexibility in extending the ship’s travel range. With 24 battery boxes onboard, the Greenwater 01 can complete a journey consuming 80,000 kilowatt-hours of electricity. This is equivalent to saving 15 tons of fuel compared to a standard container ship, highlighting the efficiency of electric propulsion systems.
According to Cosco, the vessel can reduce CO2 emissions by 2,918 tons per year, which is equivalent to taking 2,035 family cars off the road or planting 160,000 trees.
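A quick sanity check of the quoted figures, using only the article’s own numbers (the article is ambiguous about whether the 24 battery boxes are in addition to the main pack):

```python
# Sanity check of the figures quoted above; all numbers come from the article.
main_battery_kwh = 50_000      # "main battery exceeding 50,000 kilowatt-hours"
box_kwh          = 1_600       # per container-sized battery box
boxes_onboard    = 24
journey_kwh      = 80_000      # energy quoted for one journey

total_kwh = main_battery_kwh + boxes_onboard * box_kwh
print(total_kwh)               # 88400 kWh on board, above the 80,000 kWh journey figure

co2_saved_t, cars_equiv = 2_918, 2_035
print(round(co2_saved_t / cars_equiv, 2))   # ~1.43 t CO2 per car per year implied by Cosco's claim
```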

Actual range?

By jo7hs2 • Score: 4, Insightful Thread
So anybody seen an article that says how FAR it can travel, and not merely “a journey consuming 80,000 kilowatt-hours of electricity?”

Re:Offset?

By AmiMoJo • Score: 5, Informative Thread

LFP batteries are around 55 kgCO2eq/kWh for manufacturing, so 2750 tonnes for the 50MWh battery pack. Or less than a year of emissions by an equivalent fossil fuel boat.

For charging them, China has more wind power installed than the rest of the world combined, and more solar power installed than the rest of the world combined. In 2023 they also installed more wind and solar power than the rest of the world combined, by quite some margin. As I’m sure Windborn would like to point out, they have nuclear too.

This is a big deal for more than just the reduced emissions though. The technology will be exported and is an important one for cleaning up global shipping, which is responsible for around 3% of global GHG emissions.
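A quick cross-check of the manufacturing-emissions arithmetic above, taking the commenter’s 55 kgCO2eq/kWh figure and Cosco’s claimed annual saving at face value:

```python
# Cross-check of the commenter's estimate; both inputs are their/Cosco's figures.
lfp_kgco2_per_kwh = 55         # manufacturing emissions per kWh of LFP cells
pack_kwh = 50_000              # main battery pack

manufacturing_t = lfp_kgco2_per_kwh * pack_kwh / 1000
print(manufacturing_t)                                # 2750.0 tonnes CO2eq to build the pack

annual_saving_t = 2_918                               # Cosco's claimed yearly CO2 reduction
print(round(manufacturing_t / annual_saving_t, 2))    # ~0.94 years of savings to offset it
```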

Re:Offset?

By quonset • Score: 5, Insightful Thread

Does this take into account the co2 produced during manufacturing of the batteries and or the energy used to charge them?

Now ask those same questions for gas/diesel vehicles. How much CO2 is produced simply drilling for the oil? How much to transport it? How much to refine it? How much to deliver the gas to stations? And finally, how much once it’s burned?

There is no such thing as a free ride. The best you can do is reduce.

Re:Offset?

By crackerjack155 • Score: 4 Thread

They probably are not including the manufacturing or charging, but it is still going to be a lot better than a regular ship.

Electric vehicles end up being much better than fossil fuel vehicles even counting total lifecycle costs and electricity generation.

Even using shorter-lasting and environmentally worse NMC batteries without recycling, and charging them from a coal plant, an EV still ends up being better for the environment and releasing less CO2. A big study on cars done about a decade ago found the average break-even point would be about 80,000 miles for a single-use NMC battery charged from a coal power plant. It was under 20,000 miles for the average US grid mix, of which coal is a relatively small portion.

With LFP batteries, which last a lot longer and are less environmentally damaging to make, it is even better, especially combined with recycling. A fair number of batteries are already being recycled, and a lot of new recycling factories are being built, so the break-even point comes even sooner.

Oil/gas/coal carries an insane amount of emissions and environmental damage, not just from burning it, but also from extracting it, refining it, transporting it, and dealing with all the byproducts. Extracting, refining, and transporting gasoline uses a fair amount of electricity and a very large amount of heat and pressure, usually from burning natural gas; the electricity used to make the gasoline for the average car has been estimated at between roughly 1/6 and 1/2 of the electricity needed to drive a similar BEV the same distance. If the natural gas used in the process were instead put through a CSP plant and used to charge a BEV, the BEV would go farther than it does by using that gas to help refine the gasoline.

The average large coal power plant emits less CO2 per unit of energy produced than a gas/diesel car engine. You have to make a very large number of compromises to the efficiency of a heat engine to put it in a car: it needs to start up and shut down very quickly, change speed and torque not only very quickly but over a relatively large range, it doesn’t run for very long, it has to fit in a car, its speed/torque output has to stay within the range of the transmission, and a bunch of other things. A large baseload power plant, whether coal, nuclear, or CSP/NG, can take hours to start up or shut down, and once running it maintains the exact same speed/torque output 24/7.

Re:Actual range?

By quonset • Score: 4, Informative Thread

So anybody seen an article that says how FAR it can travel, and not merely “a journey consuming 80,000 kilowatt-hours of electricity?”

The article itself says the trip is between Shanghai and Nanjing. According to this calculator, that distance is 195 nautical miles.
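Taking both quoted figures at face value (the article’s 80,000 kWh journey figure and the commenter’s 195 nautical mile calculator result; the actual river route is likely longer), the implied consumption works out as:

```python
# Implied energy use per distance; both inputs are the quoted figures above.
journey_kwh, distance_nm = 80_000, 195
print(round(journey_kwh / distance_nm))   # ~410 kWh per nautical mile
```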

Satellite Operator SES Acquiring Intelsat In $3.1 Billion Deal

Posted by BeauHD
Satellite operator SES plans to buy fellow satellite operator Intelsat, in a $3.1 billion deal that’s expected to close next year. According to Space Magazine, the combined company could help it “compete with SpaceX’s huge Starlink broadband network.” From the report:
SES and Intelsat both operate communications satellites in geostationary orbit, which lies 22,236 miles (35,785 kilometers) above Earth. SES also runs a constellation called O3b in medium Earth orbit, at an altitude of about 5,000 miles (8,000 km). As [SES CEO Adel Al-Saleh] noted, there is increasingly fierce competition for the services provided by these satellites — for example, from SpaceX’s Starlink megaconstellation in low Earth orbit. And other LEO megaconstellations are in the works as well. For instance, Amazon launched the first two prototypes for its planned 3,200-satellite Project Kuiper network this past October.

“By combining our financial strength and world-class team with that of SES, we create a more competitive, growth-oriented solutions provider in an industry going through disruptive change,” Intelsat CEO David Wajsgras said in the same statement. “The combined company will be positioned to meet customers’ needs around the world and exceed their expectations,” he added.

Uh good luck

By backslashdot • Score: 3 Thread

How are they going to get the satellites up there? SpaceX is the only provider who can do it cheaply AND has the launch cadence. Plus, Starlink has a 5+ year head start. Meaning if SES started work tomorrow, their first launch would be two or three years from now and then add another 5 to 7 years before their constellation is in place in any position to compete with Starlink — by which time Starlink will be onto its Gen 2.

Re:Uh good luck

By AmiMoJo • Score: 4, Informative Thread

I think Starlink is a red herring, probably just the journalist writing about the only other big constellation they know or something.

They will likely provide other services, like 5G, IoT comms (LoRA etc.), and other stuff that Starlink isn’t really suited for. The Starlink broadband transceivers are big and very power hungry, no good for sensors, transponders, handsets, and the like. They have announced “direct to cell” 4G service, but they don’t have a big head start on that and it remains to be seen how well it will work, as it’s more of an add-on for them, not the primary purpose of the satellite.

The other issue that TFA fails to mention is that it’s getting crowded up there, and we are now looking at disposing of thousands, maybe tens of thousands, of satellites by burning them in the upper atmosphere, every single year.

America’s Wind Power Production Drops For the First Time In 25 Years

Posted by BeauHD
An anonymous reader quotes a report from Bloomberg:
U.S. wind power slipped last year for the first time in a quarter-century due to weaker-than-normal Midwest breezes, underscoring the challenge of integrating volatile renewable energy sources into the grid. Power produced by turbines slipped 2% in 2023, even after developers added 6.2 gigawatts of new capacity, according to a government report Tuesday. The capacity factor for the country’s wind fleet — how much energy it’s actually generating versus its maximum possible output — declined to an eight-year low of 33.5%. Most of that decline was driven by the central US, a region densely dotted with turbines.

Wind is a key component of the effort to cut carbon emissions, but the data highlights the downside of relying on intermittent energy sources tied to the effects of global weather. Last year’s low wind speeds came during El Nino, a warming of the equatorial Pacific that tends to weaken trade winds. La Nina, the Pacific cooling pattern that dominated in 2022 and is poised to return later this year, usually has the opposite effect.
The U.S. Energy Information Administration shared the findings in a report published earlier today.

Oh for fucks sake

By rsilvergun • Score: 4, Insightful Thread
It’s 2%. Yes, from a Wall Street standpoint that’s the end of the world and we should just shut everything down and kill everyone and everything. But from a normal human standpoint you just build out a little bit of extra capacity. It’s just kind of gross how endless growth has taken over every aspect of our civilization, because we are continuously running away from the giant monstrous dragon that is Wall Street, hoping the dragon eats the hobbit before us.

Gotta say, the headline surprised me

By 93 Escort Wagon • Score: 3 Thread

What with this being an election year and all.

All the best sites are gone

By ishmaelflood • Score: 4, Interesting Thread

So all the best sites were built on early, using what is now outdated tech. The new sites being built on now are less suitable, so the capacity factor for the whole fleet drops.

Re:Oh for fucks sake

By CaptQuark • Score: 4, Informative Thread

Yeah, the article made it seem like the 33.5% was a notable decrease (lowest in 8 years), but in reality it was only a small fluctuation.

Year Capacity factor (%)
2014 34.0
2015 32.2
2016 34.5
2017 34.6
2018 34.6
2019 34.4
2020 35.3
2021 34.4
2022 35.9
2023 33.5

Meanwhile

By Eunomion • Score: 4, Interesting Thread
Bullshit. And I’m betting the “lulls” keep happening right at peaker plant max profit point. Probably scheduling windmill maintenance right for max profitability.

Is Self Hosting Going Mainstream?

Posted by Slashdot Staff
An anonymous reader shares that IPv6rs has debuted a new one-click self hosting system:
Everyone seemed like they were talking about self hosting, but we didn’t understand why it wasn’t more prolific. Thus, we conducted a survey to hear reasons. It turned out the two most common reasons were:

1. Lack of an external IP address
2. Too difficult to set up and maintain

Our service already solves the first issue. We set out with a self-hostathon to figure out what the blockers were in setting up and running a self-hosted server.
… writes IPv6rs on their blog.
We needed to make things easier, so we created Cloud Seeder, a one click installer that instantly launches a fully encapsulated server appliance that is externally reachable.

At the time of launching, the current version of Cloud Seeder supports 20+ different appliances - from Mastodon which federates with Meta’s Threads to Nextcloud which provides an enterprise-level, self-hosted alternative to the big-name collaboration suites.

It also automatically handles updates/maintenance.

We hope this will bring a new era to self hosting and, in turn, will bring the decentralized internet forest back.
Is the self hosting era making its return?

Re: I don’t think that means what you think it mea

By Junta • Score: 5, Interesting Thread

It looks like they are truly describing hosting yourself, with an optional IPv6 tunnel provider for those stuck behind NAT. Admittedly more external dependency than is ideal, but unavoidable if the ISP grants no sort of external address, or filters traffic enough to make it infeasible. A tunnel at least means the “meat” of the service stays under your control.

Real self-hosting

By Baron_Yam • Score: 5, Informative Thread

It isn’t (just) about where the server resides, it’s about control. Someone else’s nearly fully-managed system running on your hardware using your Internet connection is not full self-hosting.

It’s the worst of both worlds.

Re:Self-hosting never left, but…

By libra-dragon • Score: 5, Informative Thread

This service is intended to solve those concerns. It’s a Wireguard VPN service. You create a reverse tunnel to their PoP and receive a public/external IPv6 address. The only technical control that would block this would be if your ISP blocks outbound connections to the service’s Wireguard listener port (default UDP/51820).
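Before reaching for a tunnel like this, a quick way to tell whether the machine already has a usable external address is to check for a globally routable IPv6 address. A minimal Python sketch (the connect() call on a UDP socket sends no packets; it only asks the kernel which source address it would pick):

```python
import ipaddress
import socket

def global_ipv6_address():
    """Return this host's globally routable IPv6 address, or None."""
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
            s.connect(("2001:4860:4860::8888", 53))        # any public IPv6 address works here
            addr_str = s.getsockname()[0].split("%")[0]    # strip a zone id if present
    except OSError:
        return None                                        # no IPv6 route at all
    addr = ipaddress.ip_address(addr_str)
    return addr if addr.is_global else None

addr = global_ipv6_address()
print(addr if addr else "No global IPv6 address; a tunnel service would be needed")
```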

Re:Trust problem

By PuddleBoy • Score: 5, Informative Thread

"…and outgoing email from my ip is rejected.”

If Gmail has been rejecting your email, then be sure to read their requirements for a TXT (SPF) record:

https://support.google.com/a/a…

Example: v=spf1 ip4:10.10.10.0/29 include:_spf.google.com ~all
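If you want to see what record a domain actually publishes, here is a small sketch using the dnspython package (an extra dependency assumed for illustration, not something Google’s page requires):

```python
import dns.resolver   # pip install dnspython

def spf_record(domain):
    """Return the published SPF TXT record for a domain, or None."""
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=spf1"):
            return txt
    return None

print(spf_record("google.com"))
```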

Why isn’t this marked “Advertisement”

By dmomo • Score: 5, Insightful Thread

Because it smells like an ad, and reads like an ad.

13.4 Million Kaiser Insurance Members Affected by Data Leak to Online Advertisers

Posted by BeauHD
Kaiser Permanente is the latest healthcare giant to report a data breach. Kaiser said 13.4 million current and former insurance members had their patient data shared with third-party advertisers, thanks to an improperly implemented tracking code the company used to see how its members navigated through its websites. Dark Reading reports:
The shared data included names, IP addresses, what pages people visited, whether they were actively signed in, and even the search terms they used when visiting the company’s online health encyclopedia. Kaiser has reportedly removed the tracking code from its sites, and while the incident wasn’t a hacking event, the breach is still concerning from a security perspective, according to Narayana Pappu, CEO at Zendata.

“The presence of third-party trackers belonging to advertisers, and the oversharing of customer information with these trackers, is a pervasive problem in both health tech and government space,” he explains. “Once shared, advertisers have used this information to target ads at users for complementary products (based on health data); this has happened multiple times in the past few years, including at Goodrx. Although this does not fit the traditional definition of a data breach, it essentially results in the same outcome — an entity and the use case the data was not intended for has access to it. There is usually no monitoring/auditing process to identify and prevent the issue.”

No tool ehh?

By skogs • Score: 3 Thread

“no monitoring/auditing process to identify and prevent the issue”
Yes, yes there is. It is called due diligence and intelligent decision making. Unfortunately for Kaiser, nobody ever considered that sharing what people were searching for and which links they clicked with their “optimization partners” might be a bad idea.
There is no automated tool to prevent this issue…because this is a human stupidity issue. They didn’t understand what they were collecting nor where it was going. Which is understandable if you’re a nitwit web designer. That is why there is supposed to be intelligent management of both the data itself and the network systems involved.
Their top two executives should spend a year in jail.
A few more instances like this and some of these people might understand accountability.

Too much Junk in the Trunk

By Tablizer • Score: 4, Informative Thread

I’m a Kaiser member, and there is way too much JavaScript and unnecessary layers in their crazy site. Many simple browser and HTML widget actions simply don’t work because an intermediate JS layer re-translates keyboard and mouse actions to something internal, it appears. They are reinventing a browser in a browser.

And it’s slow to render, with stuff bouncing around as various panels incrementally load and change the layout and flow. Thus, you often click on the wrong thing if you don’t wait at least about 5 seconds.

Kaiser’s IT team needs to go to KISS Bootcamp. Or stop renting outsourcers who throw layers at a problem instead of doing it right.

Google Removes RISC-V Support From Android Common Kernel, Denies Abandoning Its Efforts

Posted by BeauHD
Mishaal Rahman reports via Android Authority:
Earlier today, a Senior Staff Software Engineer at Google who, according to their LinkedIn, leads the Android Systems Team and works on Android’s Linux kernel fork, submitted a series of patches to AOSP that "remove ACK’s support for riscv64.” The description of these patches states that “support for risc64 GKI kernels is discontinued.”

ACK stands for Android Common Kernel and refers to the downstream branches of the official kernel.org Linux kernels that Google maintains. The ACK is basically Linux plus some “patches of interest to the Android community that haven’t been merged into mainline or Long Term Supported (LTS) kernels.” There are multiple ACK branches, including android-mainline, which is the primary development branch that is forked into “GKI” kernel branches that correspond to a particular combination of supported Linux kernel and Android OS version. GKI stands for Generic Kernel Image and refers to a kernel that’s built from one of these branches. Every certified Android device ships with a kernel based on one of these GKI branches, as Google currently does not certify Android devices that ship with a mainline Linux kernel build.

Since these patches remove RISC-V kernel support, RISC-V kernel build support, and RISC-V emulator support, any companies looking to compile a RISC-V build of Android right now would need to create and maintain their own fork of Linux with the requisite ACK and RISC-V patches. Given that Google currently only certifies Android builds that ship with a GKI kernel built from an ACK branch, that means we likely won’t see certified builds of Android on RISC-V hardware anytime soon. Our initial interpretation of these patches was that Google was preparing to kill off RISC-V support in Android since that was the most obvious conclusion. However, a spokesperson for Google told us this: “Android will continue to support RISC-V. Due to the rapid rate of iteration, we are not ready to provide a single supported image for all vendors. This particular series of patches removes RISC-V support from the Android Generic Kernel Image (GKI).”
Based on Google’s statement, Rahman suggests that “there’s still a ton of work that needs to be done before Android is ready for RISC-V.”
“Even once it’s ready, Google will need to redo the work to add RISC-V support in the kernel anyway. At the very least, Google’s decision likely means that we might need to wait even longer than expected to see commercial Android devices running on a RISC-V chip.”

China

By Hecatonchires • Score: 4, Interesting Thread

Is this due to the recent reports of China going all in on RISC-V?

Dave & Buster’s To Allow Customers To Bet On Arcade Games

Posted by BeauHD
Arcade giant Dave & Buster’s said it will begin allowing customers to bet on arcade games. “Customers can soon make a friendly $5 wager on a Hot Shots basketball game, a bet on a Skee-Ball competition or on another arcade game,” reports CNBC. “The betting function, expected to launch in the next few months, will work through the company’s app.” From the report:
Dave & Buster’s, started in 1982, now has more than 222 venues in North America, offering everything from bowling to laser tag, plus virtual reality. The company says it has five million loyalty members and 30 million unique visitors to its locations each year. The company’s stock is up more than 50% over the past year. As a boom in betting increases engagement among sports fans, digital gamification could have a similar effect within Dave & Buster’s customer base by allowing loyalty members to compete with one another and earn rewards. Ultimately, it could mean people spend more time and money at the venues.

Dave and Buster’s is using technology by gamification software company Lucra. […] Lucra and Dave & Buster’s said there will be a limit placed on the size of bets it will allow, but that they’re not publicly disclosing that threshold just yet. Lucra said across its history the average bet size has been $10. “We’re creating a new form of kind of a digital experience for folks inside of these ecosystems,” said Madding, Lucra’s chief operating officer. “We’re getting them to engage in a new way and spend more time and money,” he added. Lucra says its skills-based games are not subject to the same licenses and regulations gambling operators face with games of chance. Lucra is careful not to use the term “bet” or “wager” to describe its games. “We use real-money contests or challenges,” Madding said. Lucra’s contests are only available to players age 18 and older. The contests are available in 44 states.

Systemd Announces ‘run0’ Sudo Alternative

Posted by BeauHD
An anonymous reader quotes a report from Foss Outpost:
Systemd lead developer Lennart Poettering has posted on Mastodon about their upcoming v256 release of Systemd, which is expected to include a sudo replacement called “run0”. The developer talks about the weaknesses of sudo, and how it has a large possible attack surface. For example, sudo supports network access, LDAP configurations, other types of plugins, and much more. But most importantly, its SUID binary provides a large attack surface according to Lennart: “I personally think that the biggest problem with sudo is the fact it’s a SUID binary though — the big attack surface, the plugins, network access and so on that come after it it just make the key problem worse, but are not in themselves the main issue with sudo. SUID processes are weird concepts: they are invoked by unprivileged code and inherit the execution context intended for and controlled by unprivileged code. By execution context I mean the myriad of properties that a process has on Linux these days, from environment variables, process scheduling properties, cgroup assignments, security contexts, file descriptors passed, and so on and so on.”

He’s saying that sudo is a Unix concept from many decades ago, and a better privilege escalation system should be in place for 2024 security standards: “So, in my ideal world, we’d have an OS entirely without SUID. Let’s throw out the concept of SUID on the dump of UNIX’ bad ideas. An execution context for privileged code that is half under the control of unprivileged code and that needs careful manual clean-up is just not how security engineering should be done in 2024 anymore.” […]

He also mentioned that there will be more features in run0 that are not just related to the security backend such as: “The tool is also a lot more fun to use than sudo. For example, by default, it will tint your terminal background in a reddish tone while you are operating with elevated privileges. That is supposed to act as a friendly reminder that you haven’t given up the privileges yet, and marks the output of all commands that ran with privileges appropriately. It also inserts a red dot (unicode ftw) in the window title while you operate with privileges, and drops it afterwards.”

Re:improvement?

By OngelooflijkHaribo • Score: 5, Insightful Thread

All the same issues that sudo has will also be in whatever part of systemd checks whether it can execute the command or not and the former has a better track record with security

Also:

But the bellyaching greybeard retards here will continue to get pwned because they refused to ever learn anything new after their brain got old and now they’re hopelessly behind.

This is such a silly argument. Almost all of the people who rejected systemd use other things that were developed around the same time or even later in many cases. They’re generally using OpenRC, Runit, Dinit, or S6, all of which are about as new as systemd.

Re:Really?

By HBI • Score: 5, Insightful Thread

Seems more like a monument to the store of goodwill that Lennart has built up over the years.

Re:SystemdOS

By multi io • Score: 5, Funny Thread
It lacks a good editor.

Re: As long as sudo still works …

By BerkeleyDude • Score: 5, Funny Thread
I’d just like to interject for a moment. What you’re referring to as SystemD-OS, is in fact, GNU/SystemD-OS, or as I’ve recently taken to calling it, GNU plus SystemD-OS. SystemD-OS is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

Re: As long as sudo still works …

By BerkeleyDude • Score: 5, Informative Thread
(Also, holy shit, it’s 2024, and Slashdot still can’t handle Unicode…)

Binance Founder Changpeng Zhao Sentenced To 4 Months In Prison

Posted by BeauHD
Binance founder Changpeng Zhao has been sentenced to four months in prison after pleading guilty to charges related to enabling money laundering through his cryptocurrency exchange. CNBC reports:
The sentence handed down to Zhao in Seattle federal court was significantly less than the three years that federal prosecutors had been seeking for him. The defense had asked for five months of probation. The sentencing guidelines called for a prison term of 12 to 18 months. In November, Zhao struck a deal with the U.S. government to resolve a multiyear investigation into Binance, the world’s largest cryptocurrency exchange. As part of the settlement, Zhao stepped down as the company’s CEO.

Zhao, who wore a dark navy suit with a light blue tie to court, is accused of willfully failing to implement an effective anti-money laundering program as required by the Bank Secrecy Act, and of allowing Binance to process transactions involving proceeds of unlawful activity, including between Americans and individuals in sanctions jurisdictions. The U.S. ordered Binance to pay $4.3 billion in fines and forfeiture. Zhao agreed to pay a $50 million fine.

The wages of Sin

By Local ID10T • Score: 3 Thread

4 months at club fed… billions in profits stashed away safely

Welp

By rmdingler • Score: 3 Thread

The U.S. ordered Binance to pay $4.3 billion in fines and forfeiture. Zhao agreed to pay a $50 million fine.

Changpeng Zhao’s reputed net worth is just north of US$30 billion. $50 million, to him, is like what you’d tip the paperboy in another lifetime.

Ye olde two tiered justice system at work.

Bruce Perens Emits Draft Post-Open Zero Cost License

Posted by BeauHD
After convincing the world to buy open source and give up the Morse Code test for ham radio licenses, Bruce Perens has a new gambit: develop a license that ensures software developers receive compensation from large corporations using their work. The new Post-Open Zero Cost License seeks to address the financial disparities in open source software use and includes provisions against using content to train AI models, aligning its enforcement with non-profit performing rights organizations like ASCAP. Here’s an excerpt from an interview The Register conducted with Perens:
The license is one component among several — the paid license needs to be hammered out — that he hopes will support his proposed Post-Open paradigm to help software developers get paid when their work gets used by large corporations. “There are two paradigms that you can use for this,” he explains in an interview. “One is Spotify and the other is ASCAP, BMI, and SESAC. The difference is that Spotify is a for-profit corporation. And they have to distribute profits to their stockholders before they pay the musicians. And as a result, the musicians complain that they’re not getting very much at all.”

Perens wants his new license — intended to complement open source licensing rather than replace it — to be administered by a 501(c)(6) non-profit. This entity would handle payments to developers. He points to the music performing rights organizations as a template, although among ASCAP, BMI, SESAC, and GMR, only ASCAP remains non-profit. […]

The basic idea is companies making more than $5 million annually by using Post-Open software in a paid-for product would be required to pay 1 percent of their revenue back to this administrative organization, which would distribute the funds to the maintainers of the participating open source project(s). That would cover all Post-Open software used by the organization. “The license that I have written is long — about as long as the Affero GPL 3, which is now 17 years old, and had to deal with a lot more problems than the early licenses,” Perens explains. “So, at least my license isn’t excessively long. It handles all of the abuses of developers that I’m conscious of, including things I was involved in directly like Open Source Security v. Perens, and Jacobsen v. Katzer.”

“It also makes compliance easier for companies than it is today, and probably cheaper even if they do have to pay. It creates an entity that can sue infringers on behalf of any developer and gets the funding to do it, but I’m planning the infringement process to forgive companies that admit the problem and cure the infringement, so most won’t ever go to court. It requires more infrastructure than open source developers are used to. There’s a central organization for Post-Open (or it could be three organizations if we divided all of the purposes: apportioning money to developers, running licensing, and enforcing compliance), and an outside CPA firm, and all of that has to be structured so that developers can trust it.”
You can read the full interview here.
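As a rough illustration of the fee structure described above (this is only one reading of the draft; whether the 1 percent applies to total revenue or only to revenue from the product is a detail the draft itself would settle):

```python
def post_open_fee(annual_revenue_usd):
    """Toy model: 1% of revenue, owed only above the $5M threshold described above."""
    THRESHOLD, RATE = 5_000_000, 0.01
    return annual_revenue_usd * RATE if annual_revenue_usd > THRESHOLD else 0.0

for revenue in (1_000_000, 5_000_000, 20_000_000, 1_000_000_000):
    print(f"${revenue:>13,} revenue -> ${post_open_fee(revenue):>12,.0f}/year")
```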

Percent Revenue licenses are abhorrent

By ThosLives • Score: 5, Insightful Thread

Let’s say I make an expensive product, say, an aircraft or giant factory. I use one instance of a piece of software covered by this license - why would that software warrant 1% (or any other percentage) of the product’s revenue?

We need laws to make all licenses be fixed price, not percent-revenue based. This would also fix all the FRAND nonsense, it would quiet the app store arguments, etc.

I know why sellers want the percent-revenue model, but it’s heinous; it would actually make it impossible to create certain products if the sum of claims on revenue is too high a fraction (and in the limit, it could exceed 100%… if you use more than 100 components each demanding 1% or more of revenue).

Also, if some other part goes up in price, making my total product cost more, why should this organization get more income (based on the higher overall revenue) just because other things cost more?

Re:Percent Revenue licenses are abhorrent

By dgatwood • Score: 4, Insightful Thread

This thing Perens is proposing isn’t really a license. It’s an enforcement agency. And yeah, if you’re used to ignoring the license because you don’t think it should apply to you then probably you’re not going to like it.

I think it’s more accurate to say that no business will touch any code written on this license, because everyone assumes that they will eventually have enough revenue to have to pay the licensing fee, and that license fee is likely to exceed the value you’ll get from that software. The only companies that are likely to derive more than 1% of their income from some open source library are things like cloud companies that derive huge chunks of income from allowing people to use open source software on their hardware, but they can always dodge the costs by requiring customers to install the software themselves, and you’ll be back to square one.

Also, you can hire a contractor to rewrite a decent amount of code for $50k. So in many cases, during the last year before they hit $5 million in revenue, they’ll hire someone to write a replacement for the code, and then leave the consortium and pay zero.

The general concept is sound. Having a nonprofit responsible for collecting and distributing licensing fees for open source is a good idea. And it isn’t unreasonable to not charge fees to companies that make less than some threshold amount.

However, making it always be a percentage of the company’s income doesn’t make much sense to me. Developers should be allowed to choose a fee structure that makes sense to them. Other options include a per-copy cost, a blanket cost, a per-user cost, or some combination thereof (e.g. “the greater of $X per copy or $Y per user”). Blanket licensing fees could be set as a percentage of revenue for the product or service, or as a fixed amount, at the developer’s discretion.

But a flat-fee license of 1% of income without regard to how much licensed code a company uses doesn’t make much sense. A few companies that rely very heavily on licensed code will be massively undercharged, and the vast majority of companies will be massively overcharged, and the latter won’t want to join at all, and will choose other code instead. So you’ll basically end up with the only people who join the consortium being content creators and leeches. That’s not a healthy funding model.

Change Healthcare Hackers Broke In Using Stolen Credentials, No MFA

Posted by BeauHD
An anonymous reader quotes a report from TechCrunch:
The ransomware gang that hacked into U.S. health tech giant Change Healthcare used a set of stolen credentials to remotely access the company’s systems that weren’t protected by multifactor authentication (MFA), according to the chief executive of its parent company, UnitedHealth Group (UHG). UnitedHealth CEO Andrew Witty provided the written testimony ahead of a House subcommittee hearing on Wednesday into the February ransomware attack that caused months of disruption across the U.S. healthcare system. This is the first time the health insurance giant has given an assessment of how hackers broke into Change Healthcare’s systems, during which massive amounts of health data were exfiltrated from its systems. UnitedHealth said last week that the hackers stole health data on a “substantial proportion of people in America.”

According to Witty’s testimony, the criminal hackers “used compromised credentials to remotely access a Change Healthcare Citrix portal.” Organizations like Change use Citrix software to let employees access their work computers remotely on their internal networks. Witty did not elaborate on how the credentials were stolen. However, Witty did say the portal “did not have multifactor authentication,” which is a basic security feature that prevents the misuse of stolen passwords by requiring a second code sent to an employee’s trusted device, such as their phone. It’s not known why Change did not set up multifactor authentication on this system, but this will likely become a focus for investigators trying to understand potential deficiencies in the insurer’s systems. “Once the threat actor gained access, they moved laterally within the systems in more sophisticated ways and exfiltrated data,” said Witty. Witty said the hackers deployed ransomware nine days later on February 21, prompting the health giant to shut down its network to contain the breach.
Last week, the medical firm admitted that it paid the ransomware hackers roughly $22 million via bitcoin.

Meanwhile, UnitedHealth said the total costs associated with the ransomware attack amounted to $872 million. “The remediation efforts spent on the attack are ongoing, so the total costs related to business disruption and repairs are likely to exceed $1 billion over time, potentially including the reported $22 million payment made [to the hackers],” notes The Register.

Not the true cost

By gtall • Score: 3 Thread

The true cost includes all that, plus the damage the criminals will now cause with their $22 million, plus the example it sets for other criminal organizations that crime pays and what price they can expect from their own exploits.

It’s not just MFA

By Murdoch5 • Score: 3 Thread
Why wasn’t the session doing geolocation and IP lookup? Even if someone got the credentials, they shouldn’t have worked from the wrong location(s). Tie that in with address verification and hardware-based MFA (not SMS-based), and you have a semi-decent login system. You could extend it with additional layers, and really for health care you want three-factor authentication, which would be something like password + YubiKey + fingerprint. Geolocation and address verification would bring it up to four factors, possibly five.

When will it sink in that doing the minimum is never good enough? How many executives and managers said (to paraphrase) “least viable effort” when discussing the requirements, or cut funding, or went with the popular option because they had heard of it before? This has all the hallmarks of bad design, through to intentional bad design.
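To illustrate one of the extra factors being discussed, here is a minimal time-based one-time-password check using the pyotp package; this is purely an illustrative sketch, not a description of Change Healthcare’s actual stack:

```python
import pyotp   # pip install pyotp

# Each user gets a secret at enrollment; their authenticator app and the
# server both derive the current 6-digit code from it.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def login(password_ok, submitted_code):
    """Toy check: the password AND a valid one-time code are both required."""
    return password_ok and totp.verify(submitted_code, valid_window=1)

print(login(True, totp.now()))   # True: code for the current 30-second window
print(login(True, "000000"))     # almost certainly False
```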

Cyber Insurance

By EvilSS • Score: 4, Interesting Thread
“OK, well, you’re on your own. Thanks, we’ll see ourselves out” - Change Healthcare’s Cyber Insurance company hopefully.

These companies made a big push a couple of years ago to make MFA mandatory for renewals. Not having MFA on an external facing Citrix login portal is just inexcusable these days. It’s been supported by Citrix for literally decades in one form or another.

Extreme Heat Continues To Scorch Large Parts of Asia

Posted by msmash
Large swathes of Asia are sweltering through a heatwave that has topped temperature records from Myanmar to the Philippines and forced millions of children to stay home from school. From a report:
In India, record temperatures have triggered a deadly heatwave and concerns about voter turnout in the nation’s marathon election. Extreme heat has also forced Bangladesh to close all schools across the country. Extreme temperatures have also been recorded in Myanmar and Thailand, while huge areas of the Philippines are suffering from a drought. Experts say climate change has made heatwaves more frequent, longer and more intense, while the El Nino weather phenomenon is also driving this year’s exceptionally warm weather.

Approximate voter turnout data after polls closed on April 26 in India — when stage two of the nation’s seven-stage general election took place — put voter turnout at 61 per cent. This was lower than the 65 per cent in the first phase, and 68 per cent in the second phase five years ago. Among the states that headed to the polls last week was Kerala in the south, where media reports on April 29 said that at least two people — a 90-year-old woman and a 53-year-old man — were suspected to have died of heatstroke. Temperatures in Kerala soared to 41.9 deg C, nearly 5.5 deg C above normal temperatures. At least two people have also died in India’s eastern state of Odisha, where temperatures hit 44.9 deg C on April 28 — the highest recorded in April. In neighbouring Bangladesh, students will continue to stay home this week, after schools across the country were ordered shut on April 29. A two-judge bench of the country’s High Court passed an order directing all primary and secondary schools and madrasahs (Islamic schools) nationwide to remain closed till May 5, affecting an estimated 32 million students.

Temperature Conversions …

By CaptainDork • Score: 5, Informative Thread

41.9 degrees Celsius is approximately 107.42 degrees Fahrenheit.

44.9 degrees Celsius is approximately 112.42 degrees Fahrenheit.
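For reference, the conversion used above:

```python
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

for c in (41.9, 44.9):
    print(f"{c} degC = {c_to_f(c):.2f} degF")   # 107.42 and 112.42
```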

Re:Solar Cycle 25.

By LazarusQLong • Score: 4, Informative Thread
scroll all the way to the bottom to see the relevant data.

https://xkcd.com/1732/

Re:More nuclear fission power plants?

By hey! • Score: 4, Interesting Thread

It was never the case that the public being scared caused nuclear to be outlawed, or even *discouraged*. The problem is that investors are scared by the high capital costs, long construction times, and uncertainties about future electricity prices.

This is why nuclear requires government subsidies, either in straight grants, loan guarantees or price guarantees. It’s no coincidence that the only country in the world that did a serious nuclear crash program was France, where the electric system was *nationalized*. They didn’t go in big for nuclear to make a profit; for them it was a national security issue as a result of the OPEC oil embargoes. As soon as France privatized its electric system, nuclear construction stalled, just like it did in every other privatized system.

In any case, even if we *were* to underwrite a crash nuclear program, it’s neither necessary nor desirable to put *all* our eggs in the nuclear basket. One place we can put investment in is a modernized grid. This will not only help renewable sources like wind and solar, it will be a huge boon to nuclear plants, eliminating questionable siting choices that were driven by the need to locate the plant within 50 miles of customers.

Re:Temperature Conversions …

By istartedi • Score: 4, Interesting Thread

And if you haven’t experienced these temperatures you need to understand that those not accustomed to them can’t do much of anything when it’s that hot. I’ve experienced close to the higher of these two temperatures in Death Valley, and mild exertion was not sustainable. It was life threatening without an air conditioned car to get back to. The lower of these comps to being in my house when there was no air conditioning. What happens after a while is you’re just consumed with keeping cool and can’t focus on much else. A spray bottle and a fan helps a little, but if you’re not wet and the air isn’t dry, then there’s a point where the fan stops acting to cool you and actually heats you up—it’s a low-grade convection oven effect.

Motorcyclists are aware of this, they even have a chart out there somewhere that shows the break-even point where the wind stops cooling you and starts baking… but dang, all the links that I could find easily are badly enshittified. Just trust me, bikers will feel slightly *warmer* when riding at highway speeds in temperatures above 95F.

Some people can actually acclimate to these temperatures. They generally know who they are. The body is an amazing thing, but I’m sure even those people have their limits.

Supreme Court Declines To Block Texas Porn Restriction

Posted by msmash
The Supreme Court on Tuesday refused to block on free speech grounds a provision of Texas law aimed at preventing minors from accessing pornographic content online. From a report:
The justices turned away a request made by the Free Speech Coalition, a pornography industry trade group, as well as several companies. The challengers said the 2023 law violates the Constitution’s First Amendment by requiring anyone using the platforms in question, including adults, to submit personal information.

One provision of the law, known as H.B. 1181, mandates that platforms verify users’ ages by requiring them to submit information about their identities. Although the law is aimed at limiting children’s access to sexually explicit content, the lawsuit focuses on how those measures also affect adults. “Specifically, the act requires adults to comply with intrusive age verification measures that mandate the submission of personally identifying information over the internet in order to access websites containing sensitive and intimate content,” the challengers wrote in court papers.

Re:Something Something

By ArchieBunker • Score: 5, Insightful Thread

Are you on board with a small government?
>No, not at all
So shut the fuck up about it.

I don’t claim to be for small government and personal responsibility while at the same time creating laws that contradict my claims.

Found a moral loophole

By Hoi Polloi • Score: 5, Insightful Thread

Just say something is “to protect children” and you can ban anything

The Republican Supreme Court is a joke. Not funny.

By Anonymous Coward • Score: 5, Insightful Thread

The Republican Supreme Court is a joke that isn’t funny.

They call themselves “originalists,” but the only nexus they have to an origin is that they were all hatched the same.

Neither the elimination of Roe, the continued attacks on people’s rights and states’ rights, nor anything else they have done speaks to anything “originalist.”

The founding fathers would be turning in their graves, and THIS republican supreme court would dig them up and put stakes in them so “them’s quit turning.”

We have the most corrupt self-dealing judge, and a bunch of other pieces of crap. This is our lauded supreme court.

The Republican Supreme Court, a result of the Republican Party, is no joke. It’s not funny. It’s what will destroy our democracy.

Re:That was dumb

By skam240 • Score: 5, Insightful Thread

It also says states can’t regulate trade between states; that’s the feds’ job.

How this might apply here is if the porn sites are hosted outside of Texas.

Re:They already have that info

By alexgieg • Score: 5, Insightful Thread

Kids are just not ready for some adult stuff until older.

Until around the mid-18th century, when people in English-speaking countries became wealthy enough to afford living in houses with more than a single room and, by consequence, the very novel (at the time) concept of personal privacy came about, parents, grandparents, children and other family members all lived and slept within that one room.

In that one room the parents had sex. Right beside their old folk and the children. Sometimes the old folk had energy to have sex too. And yes, the children were frequently awake and watching. That includes all of your great-great-great-…-great-grandparents, and all their ancestors.

Besides that, almost all children worked in animal husbandry, helping quite directly several species of domesticated animals to mate, from goats and sheep to cattle and horses. What they saw when doing that was no different from what they saw their parents and grandparents doing at night.

That’s how humanity lived for most of the last 12,000 years. And, somehow, those 600+ generations of children neither had any trouble “being ready” for any of that, nor came out of it mentally broken in any way whatsoever.

So, from where, exactly, came this weird myth so many conservatives hold that present-day children are in some way different from the children of old, and cannot deal with direct knowledge of sexual acts? What is the origin of this nonsense?

How an Empty S3 Bucket Can Make Your AWS Bill Explode

Posted by msmash
Maciej Pocwierz, a senior software engineer at Semantive, writing on Medium:
A few weeks ago, I began working on the PoC of a document indexing system for my client. I created a single S3 bucket in the eu-west-1 region and uploaded some files there for testing. Two days later, I checked my AWS billing page, primarily to make sure that what I was doing was well within the free-tier limits. Apparently, it wasn’t. My bill was over $1,300, with the billing console showing nearly 100,000,000 S3 PUT requests executed within just one day! By default, AWS doesn’t log requests executed against your S3 buckets. However, such logs can be enabled using AWS CloudTrail or S3 Server Access Logging. After enabling CloudTrail logs, I immediately observed thousands of write requests originating from multiple accounts or entirely outside of AWS.

Was it some kind of DDoS-like attack against my account? Against AWS? As it turns out, one of the popular open-source tools had a default configuration to store their backups in S3. And, as a placeholder for a bucket name, they used… the same name that I used for my bucket. This meant that every deployment of this tool with default configuration values attempted to store its backups in my S3 bucket! So, a horde of misconfigured systems is attempting to store their data in my private S3 bucket. But why should I be the one paying for this mistake? Here’s why: S3 charges you for unauthorized incoming requests. This was confirmed in my exchange with AWS support. As they wrote: “Yes, S3 charges for unauthorized requests (4xx) as well[1]. That’s expected behavior.” So, if I were to open my terminal now and type: aws s3 cp ./file.txt s3://your-bucket-name/random_key, I would receive an AccessDenied error, but you would be the one to pay for that request. And I don’t even need an AWS account to do so.

Another question was bugging me: why was over half of my bill coming from the us-east-1 region? I didn’t have a single bucket there! The answer to that is that the S3 requests without a specified region default to us-east-1 and are redirected as needed. And the bucket’s owner pays extra for that redirected request. The security aspect: We now understand why my S3 bucket was bombarded with millions of requests and why I ended up with a huge S3 bill. At that point, I had one more idea I wanted to explore. If all those misconfigured systems were attempting to back up their data into my S3 bucket, why not just let them do so? I opened my bucket for public writes and collected over 10GB of data within less than 30 seconds. Of course, I can’t disclose whose data it was. But it left me amazed at how an innocent configuration oversight could lead to a dangerous data leak! Lesson 1: Anyone who knows the name of any of your S3 buckets can ramp up your AWS bill as they like. Other than deleting the bucket, there’s nothing you can do to prevent it. You can’t protect your bucket with services like CloudFront or WAF when it’s being accessed directly through the S3 API. Standard S3 PUT requests are priced at just $0.005 per 1,000 requests, but a single machine can easily execute thousands of such requests per second.
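A rough reconstruction of how the quoted bill adds up, using only the request count and the PUT price given in the article (the split across request types and regions is a guess):

```python
put_requests = 100_000_000         # "nearly 100,000,000 S3 PUT requests ... within just one day"
put_price_per_1000 = 0.005         # standard S3 PUT pricing quoted in the article

put_cost = put_requests / 1000 * put_price_per_1000
print(f"PUT requests alone: ${put_cost:,.0f}")   # $500 of the ~$1,300 bill
# The remainder would come from other request types and the us-east-1
# redirects described above, which the bucket owner also pays for.
```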

The real lesson here

By rknop • Score: 5, Insightful Thread

Create a cloud account at your own risk!

When you’re subject to usage pricing, you never know what sort of unintended interaction will cause your bill to go nuts.

I would not be surprised if this weren’t the only example of somebody not doing anything wrong, and yet incurring huge charges for things they didn’t do.

Re:How to be an idiot

By Anonymous Coward • Score: 5, Informative Thread

That should be the name of the article. Logging or not, dumb fuck left his S3 bucket publicly accessible and writable. Just another “tech writer” on medium without any tech chops.

Read closer. Initially it was not writable. Just the attempts were being charged to their account.

Cost-based shutdown

By Vlijmen Fileer • Score: 5, Insightful Thread

This is why cloud service should have a cost-based shutdown option.

It’s easily /the/ most obvious piece of functionality that should have been put in place on the second day of coding things like AWS and Azure. And yet it is missing. The reason is obvious: the companies owning those cloud services make money by denying people this option. Specifically individuals using the environments for development, or perhaps just learning, and who obviously do not have the means to legally fight such companies.

And it’s really not that difficult: making resources unavailable from the internet, or stopping or outright deprovisioning them, when usage that leads to costs above some limit is detected.

“Bezos! Got them bezos! First one’s free!”

By Pseudonymous Powers • Score: 5, Interesting Thread

“Okay, I can’t find a job in software that doesn’t require me to know AWS backwards and forwards, so I guess you got me. I’m signing up for the free tier.”

“Excellent. Let’s get you set up. We’ll just need a credit card number.”

“Wait, why do you need a credit card number for the free tier?”

[points and screeches like a pod person]

Re:Cost-based shutdown

By Anonymous Coward • Score: 5, Interesting Thread

This is why cloud service should have a cost-based shutdown option.

They do, all sorts of metric quotas and handler actions.

One big detail that jumped out at me was:
“Other than deleting the bucket, there’s nothing you can do to prevent it. You can’t protect your bucket with services like CloudFront or WAF when it’s being accessed directly through the S3 API”

A shutdown bucket can still have queries made against it.
The response is “400 bad request - Invalid target”
A billable event

Worse, redirect requests aside, all errors no matter how critical that are due to your customer configuration are 400 errors.
Amazon reserves 500 errors for critical infrastructure failures, aka problems on their end.

I’m betting the only reason deleting the bucket works to stop this (PUTs to a deleted bucket, i.e. an invalid name, are also a 400 error) is that without a valid token (unauthenticated), Amazon has no way to know who to bill…
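For reference, the closest built-in guardrail today is a cost budget with alert notifications; it warns rather than hard-stopping requests. A minimal boto3 sketch, with a hypothetical account ID and email address, assuming credentials with the budgets:CreateBudget permission:

```python
import boto3   # pip install boto3; needs configured AWS credentials

# A $50/month cost budget with an email alert at 80% of the limit.
budgets = boto3.client("budgets")
budgets.create_budget(
    AccountId="123456789012",          # hypothetical account id
    Budget={
        "BudgetName": "monthly-cost-cap",
        "BudgetLimit": {"Amount": "50", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [{"SubscriptionType": "EMAIL", "Address": "ops@example.com"}],
    }],
)
```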

Biden Administration Moves To Speed Up Permits for Clean Energy

Posted by msmash
The Biden administration on Tuesday released rules designed to speed up permits for clean energy while requiring federal agencies to more heavily weigh damaging effects on the climate and on low-income communities before approving projects like highways and oil wells. From a report:
As part of a deal to raise the country’s debt limit last year, Congress required changes to the National Environmental Policy Act, a 54-year-old bedrock law that requires the government to consider environmental effects and to seek public input before approving any project that necessitates federal permits. That bipartisan debt ceiling legislation included reforms to the environmental law designed to streamline the approval process for major construction projects, such as oil pipelines, highways and power lines for wind- and solar-generated electricity. The rules released Tuesday, by the White House Council on Environmental Quality, are intended to guide federal agencies in putting the reforms in place.

But they also lay out additional requirements created to prioritize projects with strong environmental benefits, while adding layers of review for projects that could harm the climate or their surrounding communities. “These reforms will deliver smarter decisions, quicker permitting, and projects that are built better and faster,” said Brenda Mallory, chair of the council. “As we accelerate our clean energy future, we are also protecting communities from pollution and environmental harms that can result from poor planning and decision making while making sure we build projects in the right places.”

Local Permit Process

By Jedi Holocron • Score: 4, Interesting Thread

The local permitting process in municipal building departments is the real hold up.

Getting the required permits for rooftop solar in my municipality can run up to 6 months.

For heat pumps / mini-splits, you need Manual J, Manual S, Surveys, Architectural Plans, Building Permit, Electrical Permit, Plumbing Permit (if also doing heat pump water heater), etc…etc…lots of stupid paperwork and extra cost.

Insulation, need a permit.

New windows, need a permit.

If they want to speed up adoption of these newer technologies, they need to clean up the local building permit process and make it easier for the home owner AND the contractors to do the work in a timely and efficient manner.

Re:So, Biden took the legislation…and rewrote la

By Smidge204 • Score: 4, Informative Thread

Congress: “Streamline the approval process so permits get issued faster.”

Biden administration: “Okay. Here’s a memo to our various agencies on how to do that.”

Where’s the overreach? It’s literally the purpose of the Executive branch to implement the laws that Congress passes. Offering instruction on how to expedite green energy projects in addition to the other streamlining measures is 100% within the letter and intent of the law.

They didn’t alter anything; they did exactly as was required by law. Sucks for the people paying you to shit on anything that’s bad for the fossil fuel industry, I guess?
=Smidge=

Biden admin still not serious about global warming

By MacMann • Score: 3, Insightful Thread

I saw no mention of speeding up the permit process on nuclear power plants. If the Biden administration were serious about global warming and clean energy then nuclear power should get a mention. This is “Meatloaf energy policy”, as in “I will do anything for energy but I won’t do that!”

So long as politicians fear nuclear power more than global warming I find it difficult to take them seriously, or take global warming all that seriously. You want to tell me that we could all die if we don’t reduce CO2 emissions to zero before the end of X years? (Given some positive value of X.) Okay then, why not build more nuclear power plants? Too dangerous? Too expensive? The build time exceeds X years? I fail to understand how nuclear power, an energy source with a very long record of safety, is somehow a greater risk than the certainty of death from global warming. It is difficult to believe that nuclear power plants would cost more than the damage global warming would cause. If the build time for nuclear power plants is too long then maybe we should have a look at the delays caused by federal permits.

I can’t take politicians seriously on their claims on how global warming is a threat if they will not mention nuclear power as part of the solution. Note to the solar power shills, I said PART of the solution. There’s been numerous papers written from trusted public and private entities on how we need nuclear fission power as part of the mix of energy solutions or we will fail to lower CO2 emissions while still meeting expected energy needs. Our options are global warming, nuclear fission, or energy shortages. If there’s no mention of nuclear fission in a plan to address global warming then I expect energy shortages. When the energy shortages inevitably hit everyone then there is a panic for energy, then comes a return to digging up fossil fuels, and we go back to screams about how we are all going to die from global warming. This cycle has been repeating for something like 45 years, ever since the scare of radiation release from Three Mile Island and President Carter effectively killed the civil nuclear power industry.

This anti-nuclear scare mongering has been Democrat policy since Carter was in the White House (and Joe Biden was a senator from Delaware) up until Andrew Yang forced the Democrats to change their policies, at least on paper, by gaining significant support in his run for POTUS on a platform that included a plank in support of nuclear power. In the Democrat party platform document is a plank in support of nuclear fission power but I’ve yet to see any real actions to go with those words.

I keep hearing politicians talk about “all the above” energy solutions then back away from that once nuclear power is mentioned. So it is “all the above except nuclear fission”, or “Meatloaf energy”. These people fear nuclear fission more than global warming. I wonder why. Makes me wonder if they fear that the problem of global warming might be solved and they will have nothing to separate themselves from the Republicans. Go look at the energy planks in the platform documents from both the Democrat and Republican parties then tell me where they differ. I don’t see much of a difference in theory on paper, so why so much disagreement on energy policy in practice? I don’t know what they are thinking, and it’s been impossible to get a straight answer out of any of them on nuclear power since Andrew Yang forced the party to reconsider the issue.