Wyrm: Chipping away at ELF

Since Wyrm is built on a bootstrapped Scheme variant, building data structures and writing files is surprisingly difficult. Scheme can express almost any construct, but without standard libraries every tiny detail must be handled explicitly.

The “actual” Wyrm system will eventually provide low-level data structures and type support. Any construct used by “Mini Scheme” must be duplicated between the Wyrm Scheme implementation and the “generic” Scheme-based system. For the ELF decoder, the first obvious challenge is some form of structured data support. A normal Scheme application might reach for a SRFI-1 style association list. Ideally, Wyrm will eventually provide an associative container with better performance, but for now a wrapped association list works well. With the newly defined “dictionary” type in place, the next hurdle is basic support for serializing structures; for this, a new ‘encoding’ structure is created. With a dictionary type and a serialization structure defined, implementing basic support for the primary ELF file header is straightforward.
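
As a concrete illustration, here is a minimal sketch of what an alist-backed dictionary and a byte-level encoder might look like. The names (dict-new, dict-set, dict-ref, u16-le, u32-le) are hypothetical, not Wyrm’s actual API:

    ;; Minimal sketch only – an alist-backed dictionary plus little-endian
    ;; encoders. These names are hypothetical, not Wyrm's real API.
    (define (dict-new) '())

    ;; Prepend the new pair; assoc returns the first match, so a newer
    ;; binding simply shadows an older one.
    (define (dict-set dict key value)
      (cons (cons key value) dict))

    (define (dict-ref dict key default)
      (let ((entry (assoc key dict)))
        (if entry (cdr entry) default)))

    ;; Serialize small integers as lists of little-endian byte values.
    (define (u16-le n)
      (list (modulo n 256)
            (modulo (quotient n 256) 256)))

    (define (u32-le n)
      (list (modulo n 256)
            (modulo (quotient n 256) 256)
            (modulo (quotient n 65536) 256)
            (modulo (quotient n 16777216) 256)))

    ;; Example: hold a couple of ELF header fields and encode one.
    (define header
      (dict-set (dict-set (dict-new) 'e-type 2)   ; ET_EXEC
                'e-machine #xB7))                 ; EM_AARCH64
    ;; (u16-le (dict-ref header 'e-machine 0)) => (183 0)

Because the wrapper is only three functions, a “better performing” container can slot in later without touching the encoder.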

So far, basic Test Driven Development (“TDD”) practice has allowed substantial infrastructure to grow without gnarly scaffolding. The current ‘Wyrm’ program remains a simple “Hello World” display, but solid support for ELF and for the common Wyrm Scheme library is already present.

The huge challenge remains focus. Even basic decisions bog down easily when considering all the possible angles of a full toolchain ecosystem. Worse, nearly infinite complexity awaits anyone attempting to expand to a more modern feature set. And without the massive set of refactoring and “intelligent” coding tools a modern environment provides, any added functionality becomes massively distracting. A worthwhile detour might be integrating Visual Studio Code to improve quick-reference documentation.

Wyrm: Baby Steps for ELF

With the July 4th holiday, I enjoyed a three-day weekend but intentionally limited the time I spent hacking on Wyrm. There’s a lot to unpack in creating a full operating system and toolchain (even with a limited scope). Instead of jumping into full-fledged implementation, I took the opportunity to brainstorm and structure the project.

Given we’ve got a hugeeeee amount of work ahead in bootstrapping a kernel and toolchain, the big question becomes “where to start”. For this project, I’ll be trying to maintain a “Test Driven Development” practice. A unit test framework also creates a simplified environment for early development.
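
To give a sense of how little framework is actually needed at this stage, here is a minimal sketch of a test check; the name and output format are my own invention, not the framework Wyrm actually uses:

    ;; Minimal sketch of a unit-test check – not Wyrm's actual framework.
    (define failure-count 0)

    (define (check-equal name expected actual)
      (if (equal? expected actual)
          (begin (display "PASS ") (display name) (newline))
          (begin (set! failure-count (+ failure-count 1))
                 (display "FAIL ") (display name)
                 (display ": expected ") (write expected)
                 (display ", got ") (write actual) (newline))))

    ;; Example: (check-equal "append" "ab" (string-append "a" "b"))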

The first project milestone will be a “Hello World” kernel constructed by our toolchain. QEMU supports loading a binary ELF image, and most portable toolchains work with ELF binaries. For this milestone, the Wyrm toolchain will construct a valid ELF kernel image containing a “Hello World” assembly kernel. With ELF, the Wyrm toolchain can create images for either our fledgling OS or the Linux ecosystem. The end goal is a fully self-hosting system – but until that point, Linux or Windows can provide a host environment.
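
For reference, the fixed pieces of an ELF64 header for a little-endian AArch64 executable look roughly like the sketch below. The constants (magic bytes, ELFCLASS64, ET_EXEC, EM_AARCH64) come from the ELF specification; the Scheme names are purely illustrative:

    ;; Illustrative only – e_ident bytes and a few ELF64 header constants
    ;; for a little-endian AArch64 executable.
    (define elf-ident
      (list #x7f #x45 #x4c #x46   ; magic: 0x7F 'E' 'L' 'F'
            2                     ; EI_CLASS   = ELFCLASS64
            1                     ; EI_DATA    = little-endian
            1                     ; EI_VERSION = EV_CURRENT
            0                     ; EI_OSABI   = System V
            0 0 0 0 0 0 0 0))     ; EI_ABIVERSION + padding (16 bytes total)

    (define elf-header-constants
      '((e-type    . 2)           ; ET_EXEC
        (e-machine . #xB7)        ; EM_AARCH64
        (e-version . 1)
        (e-ehsize  . 64)))        ; an ELF64 header is 64 bytes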

If someone forced me to select a ‘favorite’ programming language, I’d likely fall into the Python camp. Python does not, unfortunately, make for a good “system” programming language. However, both Julia and Nim advertise some degree of compiled/system programming features. For our toolchain, I’m going to pull a page out of Julia’s playbook and use the simplicity of Scheme for the compiler and runtime implementation. With a strong Scheme toolchain, I hope to experiment with a maybe-Python / maybe-Scala frontend. And by starting with a Scheme work-alike, we can use a “proper” Scheme implementation to bootstrap the system. I’ve selected Chicken Scheme.

With the few commits this weekend, there’s now a small test framework and the start of some low-level Scheme primitives for building ELF files.

Introducing Wyrm

For a long time, I’ve maintained various iterations of low-level operating system logic and programming language interpreters. The earliest iterations focused on recreating QBasic and DOS; newer ones explored various technology-stack ideas (the last being microkernel- and exokernel-based approaches). The only time my software stack ever ventured out to be seen by others was… as sample code for a job interview.

I’ll be covering this project on the podcast – but before adding the glitz, I find myself wanting to sit and write about the idea, starting with the why, the what, and the for whom.

I’d be lying if I didn’t admit to a strong desire to build “the next thing”. And – I’d be lying to myself if I argued Wyrm had any hope of being the next thing. Instead, the mission of Wyrm is simple: a playground for OS and programming language conceptual development. My hope is to build upon (or create) some framework similar to the hello world staples provided at the OSDev Wiki. Instead of duplicating Unix and C, my intent for Wyrm is to explore the history of Amiga, Newton, and LISP machines. And, of course, duplicate Unix and C at some point.

I do not plan on supporting many hardware platforms – only ARM, and likely only one or two available single-board computers. I’m considering the Raspberry Pi 4, Asus TinkerBoard, and a QEMU AArch64 machine for starters. This does presuppose that I manage to get the language itself into a workable state. As I don’t have a lot of time to dedicate to the project, I suspect progress will be slow and may be redirected to other ARM (or RISC-V) cores as time goes on.

I’m starting with a “blank slate” for this project. My goal will be to cover the fits and starts and pain associated with birthing an Operating System from scratch. There are multiple toy OS projects out there – and multiple “real” projects – but developers tend to “wait” until some mythical “beta” period before showing their work. Realistically, I don’t see myself having the time to hit such a milestone quickly (especially starting from the ground up). That said, I’ve built many toy interpreters and kernels, so I suspect something will appear at some point. From experience, a bootable “hardware” ARM kernel is a few weekends’ worth of effort. Then again, my free weekends are few…

elfenix/wyrm: OS and Language Playground (github.com)

CS Topics: Hash Functions

Today we cover one of the primary building blocks for blockchain – Hash Functions.
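
As a toy illustration only (nothing like the cryptographic hashes, such as SHA-256, that blockchains actually rely on), here is a tiny djb2-style string hash showing the basic contract: the same input always maps to the same fixed-range output:

    ;; Toy hash only: deterministic and fixed-range, but NOT cryptographic.
    (define (toy-hash str)
      (let loop ((chars (string->list str)) (h 5381))
        (if (null? chars)
            h
            (loop (cdr chars)
                  (modulo (+ (* h 33) (char->integer (car chars)))
                          4294967296)))))    ; keep the result within 32 bits

    ;; Repeated calls with the same input always agree, and the output
    ;; always falls in the fixed range [0, 2^32).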

Distro Thoughts #1

So, after some consideration, I’ve decided to resurrect my previous efforts at building a Linux distribution – mostly because I’d like to tinker with a lightweight Linux that’s easily customizable. Something that really goes “back to basics”.

My first experiments were attempts at bootstrapping a Clang/Musl build variant. Ugh!

My initial environment was Ubuntu 18.04 – with a modern C++ toolchain. I thought it’d be easy to populate a chroot environment, especially with no cross compiling required. The LLVM code looks generally pretty clean – big, it does a lot, but clean. The build system, though? See my previous comments on build-system messes. The ‘one repo’ approach assumes a rather complete environment and does not bootstrap well into a new rootfs. I thought a hacky compile-from-source build would be neat – but it does not seem doable.

Too much time spent on this today, time to go outside and enjoy the sun.

Yes, I am alive…

…although, for the past two months I’ve had no access to a computer at home, besides my Mac laptop, which was at AppleCare for some of that time… “The great mishap” happened on Friday, February 10th. Long story short: if your computer is freezing in Linux at random intervals, it’s probably a better idea to figure out why before you get that lovely ozone/burning-silicon smell from melted components. To make matters more interesting, my file server went down about a week later. I managed to revive (or rebuild) my main computer a couple days ago, so I once again have rPath Linux going. I have not recovered the file server yet; it will only stay on a few minutes before shutting itself off. As I don’t think I can really justify having a file/print server anymore, I’m probably going to go out and purchase a huge hard drive and move everything onto that.

Why didn’t I get everything working again sooner? Let’s just say that student teaching is probably the biggest time drain of all time… I had no clue what teachers went through before: grading papers, doing lesson plans, etc. Especially where I was working, where they have gone from teaching 5 classes to teaching 6. My basic schedule involved getting home at around 4:30 every day, and then grading papers, working on lesson plans, updating worksheets, and figuring out presentations on my iBook (not to mention all the actual student teaching stuff) until 7:30-8:00 PM. Of course, student teaching didn’t start out that way – they start you out observing, then teaching pre-prepared lessons, and so on. Teaching in general is basically a 12-hour-a-day job, the best I can figure. Everyone has been telling me that after your first year, once you get everything figured out, it becomes an 8-hour-a-day job. I don’t see how that’s possible, but I’ll take their experience over my own limited experience. Anyway, if you wake up at 6:00 AM every day to get to the school at 7:30 AM, and then work solid until 7:30 PM with only a 45-minute break to drive home, figuring out how to repair a computer isn’t exactly on your mind afterward.

Which brings me to the good news: I’m through student teaching as of yesterday, 4/7/06, and will officially receive my degree next month. That means, for the time being, I’m taking some much-needed time off to recoup, figure things out, and try to answer the non-spam, non-mailing-list emails I’ve received over the past two months of computerlessness. Given that my inbox currently holds 20,000+ emails, that might take some time. (I think three-quarters are spam, but I can’t seem to get junk-mail filtering working right with Thunderbird…) If you emailed me and didn’t get a response, or were expecting an email from me that never arrived, I’d highly recommend emailing me again, as the message could very well have gotten lost in the pile.

So, to summarize: I’m currently a solid two months behind in all things open source, and given the pace at which open source evolves, that might as well be two years. I’ve got some serious catching up to do, as I don’t intend to leave the community even though I must admit I went on hiatus for a while there. Hopefully I’ll be back on IRC after moving some hard drives around and doing a reinstall to get current; my best guess puts that sometime on Tuesday. I’m taking the next several days off to go do something fun. I don’t know what yet, but I deserve it. This message was written on the repaired computer, with the old hard drive just plugged in and almost nothing working right – I just wanted to leave a long note saying I was alive.

I owe a lot of people an apology for essentially dropping off the face of the Earth. Specifically, I think I’ve really slighted those at rPath. Dropping out of the community for a bit, especially without contacting anyone to let them know what was going on, was very unkind, especially given how great they have been. So I apologize here, with the hope that everyone understands my brain has been mincemeat for a while. My instincts tell me they probably need some help packaging split X, split KDE, and the new DocBook stuff that probably came out in the last two months. At least, that’s the major new work that seems to have happened after I left. Maybe I can make some amends by helping out. It’ll be interesting to see some real bugs after chasing down missing and extra semicolons in students’ code for the past three months. I’ll also be back on the lookout for bugs in the distribution, now that I’m installing it again.

I tried running a few conary updates to see if I could still work with a 2+ month-old conary against whatever version is running on the server. It worked. Conary is definitely ready to use now. I hadn’t seen much of anything major bug-wise in January, and this last update “just worked”. (Not that I recommend using a 2+ month-old conary; bug fixes and updates are good things…)

Best wishes to everyone, it’s good to be back.

Fun on a friday afternoon…


So, this Friday I went into surgery, for reasons I will not post publicly. First, kudos to everyone at Lake Point Hospital; they were, quite literally, great. Excepting the exorbitant medical fees I’ll be paying, I couldn’t ask for better. After weeks of pain, I’m finally relieved that… there is no more pain, partly due to the large amounts of prescription pain pills floating around in my bloodstream.

So, to sum up: yay for prescription pain pills, yay for good doctors, and yay for an anesthesiologist who actually managed to put the right amount of juice in me to keep me knocked out for the surgery. (Unlike last time…)

On the KDE front: KDE 3.4 packages are a-coming! They will hopefully be done in time for the next release. This time I hope to actually have a patch against kdemultimedia, so we can distribute noatun and friends legally (i.e., without MPEG and MP3 support).

I’m also starting work on what I’m calling project Mini-Me Linux. It’ll be based on Conary but diverge in a few key areas: (1) it’ll use uClibc and BusyBox for most of the utilities, and (2) the init process will be very different. The main purpose of Mini-Me Linux will be to serve as a wireless access point/router/firewall. I’ve got the design mostly figured out; now I just need to execute!

Trudging Ahead

After spending a good deal of time researching how MIME type/file associations are handled, I’ve managed to figure out how GNOME and KDE handle themselves. While GNOME and KDE have good user documentation, this sort of thing is sadly missing; there is little documentation for packagers and distribution maintainers. I guess they expect us to be reading and contributing to the code. In any case, the good news is that some future version of Specifix will have the GNOME/KDE file association bug fixed.

Now that particular mole has been whacked, I’ve turned my attention to other important packages – namely Xfce and SCons, as well as updating KDE to the latest version. In other news, I got Conary running on PPC; now all that remains is rebuilding everything for ppclinux. I’ve decided my laptop needs to be reloaded with something more like Specifix first; I’ll probably go with Yellow Dog or something similar.

In something non-Specifix related, I recently got a digital camera thanks to JForbes, which means… puppy pictures. Check out my main page to see. I’ve also completely revamped my website to make it a bit friendlier to edit.

6/5/2003 Entry

GNU – the organization that keeps giving. I’ve currently got two different projects hosted on nongnu.org. All things considered, it’s a much better service than SourceForge. (Ahhh, SourceForge, home of twelve million dead projects and a couple dozen still-useful ones.) But that’s not what I’m here to write about.

I’m here to write about the sickening pothole into hell that is autoconf. Autoconf, through some sick joke, has managed to succeed as the #1 method of generating makefiles. It’s used by all major projects in some form or another. If it’s on GNU/Linux, it’s most likely configured by autoconf (unless we’re talking about X11, the only piece of software that makes autoconf look stylish). But see, the funny thing is, I don’t think ANYONE quite understands how Autoconf works outside of the three or four people who maintain it.

I would like to know, for instance, why on half the projects I compile, after running configure once and typing make, configure decides for no apparent reason that it needs to run AGAIN – each time running the SAME freaking tests OVER AND OVER AND OVER. I can recite half of them from memory now, from watching compile screen after compile screen after compile screen. Let me think: “checking for unistd.h”, “checking whether build environment is sane”, “checking for BSD-compatible install”, and so on and so forth. One wonders if it ever occurred to anyone that *maybe* this invention called a hard drive could *somehow* manage to actually store this information for later use.

But I digress. It’s the very height of fun, when installing program xyz version 5.4.2, to sit and run through its 55,000 different options. Is it --with-x today? Or --with-gtk? Or do I need a --enable-gnome? Oh, the choices!

Of course, when things go wrong, that’s where the real fun begins. Oh no, xyz version 5.4.2 thinks I have libabc installed in /usr/lib, when it’s really in /opt/kill/me/now/damn/it. Naturally, the wonderful configure picked up on the fact that it exists, just not WHERE it exists. I don’t, however, know this immediately. Instead, I have to recompile the program, redirect stderr (hint for n00bs: it’s >&) to a file, tab through the file, and find the start of a 93-line-long list of errors.

Now, if you were in this situation five years ago, you would know to take a quick look at the Makefile and add a few more flags to one environment variable or another. So you, being the seasoned Unix person you are, pop open emacs (the one thing the GNU people got right) and open the Makefile, only to be surprised by a 50,000-line monstrosity with multiple, conflicting environment variables. Upon seeing this gateway to the seventh circle of hell, you of course do one of the following: A. close the editor and try another piece of software, B. try to edit one of the 53 different .in, .am, .figtree, .bannana, .yoyo files that auto* uses, or C. shoot yourself in the head with a twelve gauge shotgun. Or, you could try to wade through the file, editing the 500 different $LIBS flags, all of which are completely useless except for the one three directories down called $(FOO_BAR_NONSENSE_NAME_VARIABLE), only to discover that, having FINALLY found the proper variable, running make the next time causes “configure” to rerun and trash all that hard work. Of course, you forgot to write down the file, line number, or name of the variable, so the wonderful search starts all over again.

Let’s be honest. People don’t write configure.in files; they either A. use a tool to generate them, or B. copy them from someone else. What else would explain the same five errors appearing in every program that uses autoconf? Finding documentation is even more of a challenge: you are left either purchasing a book (which will be out of date), reading an ancient HOWTO (most of which no longer works properly), or wading through the even more wondrous “info” tree. It doesn’t help that the “info” tree separates things into four different packages, requiring you to flip from one to another to figure out why your configure.in is causing your Makefile.am to puke when run through automake.

I’ve come to the conclusion that the auto* tools were designed by Satan himself in an attempt to drive me into a slow and deliberate madness. The sad thing is, I think it worked.

5/17/2002 Entry

Long time, no updates. Well, lots of news here.
First, I quit my previous job. I left amicably with the company, but I doubt I would ever want to work there again. I have started working again on all of my pet projects – most notably Fluid, Sasteroids, and the Linux distribution. (Fluid is going to be really nifty for all of those X people out there.)

Finals are over, and I now have my grades, which means: I GET TO HAVE SOME TIME OFF. It’s about time too; I thought I was going to go mad. I’m chillin right now, but I’m also working on a couple new articles (living with Xinerama, and a whole new bunch of SDL tutorials).

For those who noticed: for the past week or so, I’ve had my articles page here instead of the news. Quite simply, I accidentally overwrote this page with a school assignment (topic/etc. not chosen by me) – sorry to everyone who showed up here and got confused… 😉