geerlingguy 2 days ago | next |

This is excellent news, as it should unblock having precompiled packages available for a number of applications for arm64—for me, most notably, OpenZFS: https://github.com/openzfs/zfs/issues/14511

sebazzz 2 days ago | root | parent |

Was cross compilation not an option?

saurik a day ago | root | parent | next |

My experience asking this question is that effectively no one understands how cross-compilation works (as is also seen here in the response involving nested virtualization)... which is really disappointing, given that it causes even more chaos when people fail to understand that even deploying to their same architecture on Linux should be set up more like a cross-compiled build (to avoid properties of the build system bleeding into the resulting binary). As far as I can tell, people just think that compilers can only target the system they are on, and that if they want to target other architectures, other operating systems, or even merely older systems, they have to run their build system on a machine equivalent to their eventual deployment target.

pjmlp a day ago | root | parent | next |

What can you expect, when many don't even understand how linkers work, and #include source files scripting-style to avoid learning about them?

Installing a compiler toolchain that targets another platform is next level.

DrillShopper 17 hours ago | root | parent | prev |

I work for a healthcare company, and one of the things we have to be able to do is reproduce our software for investigations. As a result, we build static cross-compilers pointing at a small system root extracted from the distribution we're building for, but targeting the same architecture we're building on. That way we can ensure that host system dependencies are not embedded in the built result, which means we can pull our compiler and system root out of the archive and run them on practically any Linux system.
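A rough sketch of what that kind of sysroot-pinned invocation looks like (clang shown just for illustration; the sysroot path and source file are hypothetical):

    $ clang --target=x86_64-unknown-linux-gnu \
        --sysroot=/opt/sysroots/el7-x86_64 \
        -static -O2 -o app app.c

The point is that headers and libraries come only from the extracted sysroot, never from whatever host the compiler happens to be running on.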

We usually keep archives of the software releases (even ones that are really, REALLY old and mostly no longer in service, except for refurbs of old product), but being able to rebuild them, and more importantly to build a fixed version targeting the OS they originally targeted, is really nice.

tliltocatl 19 hours ago | root | parent | prev | next |

Somewhat tangentially, cross-compilation seems to have been frowned upon in Unix historically. A lot of things out there just assume HOST == TARGET.

relistan a day ago | root | parent | prev | next |

Our workload took nearly 18 minutes to cross-compile on their AMD64 runners. It builds on the AArch64 runners in 4 minutes. (The whole container, I mean.)

justincormack a day ago | root | parent |

That's probably not a cross-compile then, it's an emulated compile. Cross-compiling is basically the same speed as a native build.

relistan 14 hours ago | root | parent |

Sure, you know what I meant. It's an emulated compiler compiling natively. But the point is that building AArch64 containers under emulation sucks, and it doesn't suck under a native build.

0x457 2 days ago | root | parent | prev |

I'm probably wrong, but I think this kind of cross-compilation requires nested virtualization, and GHA-hosted runners don't support it.

yjftsjthsd-h 2 days ago | root | parent | next |

Why would it need virtualization at all? The point of cross-compiling is that you build binaries for a different arch/platform, ex. running gcc as an x86_64 binary on an x86_64 host turning C into aarch64 binaries.
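For example, on a Debian-ish x86_64 host, it's something like:

    $ sudo apt install gcc-aarch64-linux-gnu
    $ aarch64-linux-gnu-gcc -o hello hello.c
    $ file hello
    hello: ELF 64-bit LSB executable, ARM aarch64 ...

The compiler itself is an x86_64 binary; only its output is aarch64. No virtualization anywhere.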

AlotOfReading 2 days ago | root | parent | prev |

You can do cross-compilation in GitHub Actions, and testing on QEMU is straightforward. I have a repo that builds for and tests half a dozen emulated targets.
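For a single cross-built test binary, running it under user-mode QEMU is one line (binary name hypothetical; -L points at the target's library root so dynamic linking works):

    $ qemu-aarch64 -L /usr/aarch64-linux-gnu ./my-tests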

ecnahc515 a day ago | prev | next |

While this is great, for people claiming they can now build multi-arch images without emulation: how are you planning on doing so? As far as I know, if you want to build multi-arch images on native runners for each platform, you basically need to:

* Configure a workflow with one job per arch, each building a standalone single-arch image, tagging it with a unique tag, and pushing it to your registry

* Configure another job that runs after the previous jobs complete and creates a combined manifest containing each image using `docker manifest create`.

Basically, doing the steps listed in https://www.docker.com/blog/multi-arch-build-and-images-the-... under "The hard way with docker manifest".
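Roughly, with hypothetical registry and image names:

    # jobs 1 and 2, one per native runner, each with its own tag:
    $ docker build -t registry.example.com/app:sha-amd64 .
    $ docker push registry.example.com/app:sha-amd64

    # job 3, after both finish:
    $ docker manifest create registry.example.com/app:latest \
        registry.example.com/app:sha-amd64 \
        registry.example.com/app:sha-arm64
    $ docker manifest push registry.example.com/app:latest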

Does anyone have a better approach, or some reusable workflows/GHA that make this process simpler? I know about Depot.dev which basically abstracts the runners away and handles all of this for you, but I don't see a good way to do this yourself without GitHub offering some better abstraction for building docker images.

Edit: I just noticed https://news.ycombinator.com/item?id=42729529 which has a great example of exactly these steps (and I just realized you can push the digests instead of tags, too, which is nice).

jhardy54 a day ago | root | parent |

Does build-push-action solve this? I haven’t used their multi-arch configs but I was under the impression that it was pretty smooth.

https://github.com/docker/build-push-action

trumpvoter a day ago | root | parent |

It runs in a single job, where single job = single runner. To build multi-platform with two runners/jobs, each needs to push an untagged image, and the SHAs are then aggregated into a manifest in a third job. Definitely doable, and the recipes will come out.
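The fan-in job ends up being something like this (registry name and digests are placeholders):

    $ docker buildx imagetools create \
        -t registry.example.com/app:latest \
        registry.example.com/app@sha256:<amd64-digest> \
        registry.example.com/app@sha256:<arm64-digest>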

Personally prefer just using Go/ko whenever possible ;)

kylegalbraith 2 days ago | prev | next |

This is exciting to see; arm64 is really a growing space, as we've seen since first launching our Docker image build acceleration [0]. Free for public repos is definitely a strong pull if you can live with some of the quirks.

Even with this, building multi-platform Docker images with fast persistent caching in GitHub Actions will still be slow in the worst case and tedious in the best case.

We've also expanded into GitHub Actions runners, bringing our fast caching and faster compute into the actual runner.

We've done some cool things like making caching and disk access faster using ramdisks, Ceph, and blob storage [1]. We're offering Intel, ARM, and macOS runners at half the cost of GitHub's runners for private repos, and we're also focused on accelerating even more builds outside of the runner [2].

[0] https://depot.dev/products/container-builds

[1] https://depot.dev/blog/introducing-github-actions-ultra-runn...

[2] https://depot.dev/blog/introducing-depot-cache

eltondegeneres a day ago | root | parent |

Your landing and product pages don't mention macOS, only the pricing page does, but the docs make it look like the macOS runners are the same price as GitHub's.

kylegalbraith a day ago | root | parent |

Yeah, this is definitely lacking on our pricing page; thank you for flagging it.

We charge $0.08/minute for macOS runners [0], which have 8 CPUs, 24 GB of memory, and 150 GB of disk. They run on M2 chips, so the closest GitHub-hosted macOS runner is the arm64 one with 6 CPUs at $0.16/minute [1].

It's also worth mentioning that we charge by the minute but track by the second, whereas GitHub rounds up to the nearest minute. So a 10-second build on Depot is 10 seconds, and you don't get charged for a minute until you've accumulated a minute's worth of build time.

[0] https://depot.dev/docs/github-actions/runner-types#macos-run...

[1] https://docs.github.com/en/billing/managing-billing-for-your...

bhollis 2 days ago | prev | next |

We're using Go, so cross-compilation has never been a big problem (for producing artifacts). But this'll be great for testing on ARM. I'm interested to see the performance of these instances too - our experience has been that Amazon's Graviton processors have fantastic bang-for-buck vs. Intel/AMD.
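For anyone who hasn't seen it, a Go cross-build is just environment variables, e.g.:

    $ GOOS=linux GOARCH=arm64 go build ./...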

ncruces a day ago | root | parent | next |

If you're using Go, you can also run tests with QEMU binfmt on Linux.

https://wiki.debian.org/QemuUserEmulation

Many people don't know this, but on a correctly configured amd64 Linux box this just works:

$ GOARCH=s390x go test

The test is cross-compiled, and then run with QEMU user-mode emulation.

Configuring this for GitHub Actions is a single dependency: docker/setup-qemu-action@v3
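If I recall correctly, that action registers the binfmt handlers via the tonistiigi/binfmt image, so the manual equivalent on any Linux box with Docker is something like:

    $ docker run --privileged --rm tonistiigi/binfmt --install all
    $ GOARCH=arm64 go test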

Also, if you want to test different OSes, there are a couple of actions to accomplish it.

I'll probably be integrating these Linux ARM instances, but this workflow should give you an idea of what was already possible with the existing runners:

https://github.com/ncruces/go-sqlite3/blob/main/.github/work...

CaliforniaKarl 2 days ago | prev | next |

This is awesome!!!

I switched from an Intel Mac to an Apple Silicon Mac a few months ago, and have been trying to do as much stuff as possible on ARM.

One thing this should do is make people think more about switching their cloud-based workflows to ARM CPUs, which are generally less expensive.

verdverm 2 days ago | prev | next |

This is awesome. I have been dealing with weird errors in GHA for years when having to emulate for multi-arch builds.

mlhpdx 2 days ago | prev | next |

Nice. I had gone looking for this a week or two ago and was surprised it wasn’t available to me.

suryao 2 days ago | prev | next |

For cheaper (for private repos) and faster arm64 runners, check out what we're making at WarpBuild.

We also support spinning up self-hosted runners on your AWS/GCP/Azure in just a couple of clicks.

mystified5016 2 days ago | prev | next |

Our CI runners live on a box in the corner of the office and their only operating cost is my time.

Paying someone for CI compute seems insane. The load is so variable that you never know if your monthly bill will be zero or several hundred or several thousand dollars. I especially don't want my employees to consider that each and every push costs the company a nonzero amount of money. CI should be totally free and unrestricted. If a new employee has a really bad day and fires off a hundred CI runs (as we all have), I don't want to explain to accounting why there's an enormous spike in the bill.

It costs us a couple of my salaried hours a month to maintain our on-site infra. Far, far less than our present AWS bill. Most months it needs no attention. It just sits there and does its job. Hell, it's even solar powered.

lbotos a day ago | root | parent | next |

Ok.

You could:

- Host your own set of static runners on AWS, which have a fixed monthly cost.

- Pay a provider for hosted runners. Most providers bill in CI minutes, so if jobs run amok you run out of minutes rather than running up your bill.

- Set up auto-scaling runners that ebb and flow based on demand. This is the one case that carries the risk you're describing of an unexpected bill increase.

So two of the three ways of "paying someone else for CI compute" are just as cost-predictable as your solution. Yours could be cheaper, but the risk of an "unexpected bill increase" is not really there.

tonymet a day ago | prev | next |

This makes distributing Raspberry Pi binaries a bit easier. I was running the GitHub Actions runner on my raspi (which works pretty well).

joshstrange a day ago | prev |

GitHub Actions is overpriced and slow. WarpBuild [0] is so much nicer. Our iOS build times dropped by half, and it costs less than the base macOS runner on GHA. It couldn't have been easier: I just set it up, changed the runner image name, and then promptly forgot about it because it works the exact same way as before. GH secrets work, and runs show up in the same place. It's one of the only times I've improved performance and saved money without changing anything on my end.

I did have to move my repos into an organization because you can only use WarpBuild with organizations, not personal accounts, but I probably should have been doing that anyway.

[0] https://www.warpbuild.com