Author: ObsoleteMadness

Mounting OnTrack Volumes in Windows

I use an old dynamic drive overlay tool called OnTrack to get around the 504MB drive-size BIOS limit on older machines. It works well enough for DOS and early versions of Windows. However, if you try to mount the volume on another machine, it will appear to have no valid partitions!

SD Adaptor inside a laptop

OnTrack appears to put the original MBR at around sector 63, with the partition boot record (PBR) offset a further 63 sectors from there. This is quite different to a regular MS-DOS drive layout, and is caused by the custom loader OnTrack uses to replace the BIOS routines.

On-Track starting up

To mount these volumes on Windows, we need a way to mount at an offset of 126 sectors (126 sectors × 512 bytes per sector = 64,512 bytes).

Fortunately, there is a utility that can do this: ImDisk.

ImDisk is an open-source virtual disk driver with many features, including RAM disk support, among others.

To mount my OnTrack volume using ImDisk, I need to know the physical disk # in Windows.

  1. Open an admin command prompt
  2. Run diskpart, then list disk to get a list of disks. Note the ~8GB disk is my SD card, and it is Disk 2

    DISKPART> list disk
    
    Disk ###  Status         Size     Free     Dyn  Gpt
    --------  -------------  -------  -------  ---  ---
    Disk 0    Online          476 GB  1024 KB        *
    Disk 1    Online          476 GB      0 B        *
    Disk 2    Online         7580 MB    10 MB
  3. Exit diskpart.

Mounting the volume

  1. Open an admin command prompt
  2. Run the following command:
    imdisk -a -f \\.\physicaldrive2 -b 64512 -o ro -m x:

The volume is now mounted as the X: drive.

If it's not, double-check your drive path and byte offset.
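If you want to sanity-check the offset before trusting a mount, you can read the sector at that byte offset and look for the 0x55 0xAA boot-sector signature. Here's a minimal C# sketch, assuming Disk 2 and 512-byte sectors as above (and a modern .NET where FileStream accepts device paths); run it from an elevated prompt:

    using System;
    using System.IO;

    class CheckOntrackOffset
    {
        static void Main()
        {
            const long offset = 126 * 512; // 64,512 bytes - keep reads sector-aligned
            var sector = new byte[512];

            // Device paths like \\.\PhysicalDrive2 need administrator rights.
            using (var disk = new FileStream(@"\\.\PhysicalDrive2", FileMode.Open, FileAccess.Read))
            {
                disk.Seek(offset, SeekOrigin.Begin);
                disk.Read(sector, 0, sector.Length);
            }

            // A valid boot record ends with the 0x55 0xAA signature.
            bool looksValid = sector[510] == 0x55 && sector[511] == 0xAA;
            Console.WriteLine(looksValid
                ? "Boot signature found - offset looks right."
                : "No boot signature - check the disk number and offset.");
        }
    }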

What is this doing?

  • -a - tells imdisk to attach a virtual disk
  • -f - specifies the file to mount. In this case, we're using the Windows NT physical drive path.
  • \\.\physicaldrive2 - note this corresponds to Disk 2 from diskpart.
  • -b 64512 - tells it to use a byte offset, in this case (63 + 63) sectors × 512 bytes = 64,512 bytes for the start of the first partition.
  • -o ro - mounts the volume read-only. If you're unsure you've got the right drive and/or offset, make sure this option is set.
  • -m x: - mounts the drive as X:

Making it writeable

You'll need to unmount the disk (imdisk -D -m x:) and re-run the above command without -o ro.
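For example, with the same disk and offset as above:

    imdisk -D -m x:
    imdisk -a -f \\.\physicaldrive2 -b 64512 -m x: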


Archiving Old Books

Looking through the bookshelf, it appears I have a few titles which at least the Internet Archive doesn't have.

I've enjoyed the use of books and magazines others have scanned and uploaded online, so I thought it only prudent I do the same. My printer has an ADF, so I figured I'd give it a go!

Preparation

Unless you want to be scanning individual pages on a flatbed, or with some kind of camera setup, you'll need to prepare your books. Essentially, this involves cutting or removing the spine of the books. This is clearly a destructive process, so it may not be for everyone.

There's lots of advice that can be found online on how to do this. I didn't want to spend a lot of time or money for this step.

Ready to Scan

Instead, I went down to my local Officeworks, which has a service for this. For $1 a book, they'll use their fancy guillotine to cut off the spine. I suspect thicker books might need to be sliced a couple of times to fit (I was advised they could do up to 250 80gsm pages at a time!).

Scanning

I'm lucky enough to have an MFC at home with an automatic document feeder (a Konica Minolta Bizhub C35).

Scanner goes brrr

I used the built-in "Windows Fax and Scan" application to connect to the scanner via WIA, and scanned directly to TIFF at 600 DPI. Note that the documents I'm scanning were all black-and-white, so I used black-and-white mode to scan.

Preparing the PDF

I couldn't find a good free option for this. Instead, I used Foxit PDF Editor to convert the scanned TIFFs to PDF. It does deskewing and OCR for me, which is really handy.

I made sure to fix up the page numbers in the PDF to match the pages in the books (it's a pet hate of mine when PDFs don't do this), and to add some missing metadata.

Issues

I've only done two so far and it's been reasonably smooth. Lessons so far:

  • For books with glued spines, make sure you flip through every page and ensure each page is free. It'll save having to rescan pages that went through all at once.

  • Archive.org doesn't like the output from Foxit PDF. I had to run it through PDFTK to make the Internet Archive happy (see the command below) :/
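The pdftk step can be as simple as a pass-through rewrite, which regenerates the file structure; something like the following (filenames here are just examples):

    pdftk foxit-output.pdf output archive-ready.pdf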

Uploads so far


PictSharp Updated

Get it from Github

I recently returned to a project I first started in 2017, PictSharp.

PictSharp is a pure C# library for encoding bitmap images to Apple's legacy PICT format. I originally started it for the GopherServer project, as I wanted a way to convert modern images for old Macs that did not depend on external applications (on either the server or the client).

I revisited it to add support for .NET Core, but in the process ended up implementing 1-8bpp support, non-power-of-two image sizes and ImageSharp support, as well as learning how GitHub Actions work and publishing my first (public) NuGet package!

While I suspect the number of people who want to create PICT images in 2022 is fairly small, it may assist those who want to learn about legacy formats or need to support older systems and formats (e.g. RTF, which supports MacPict as one of its original v1 image formats).

Features

  • Implemented entirely in C# code. No native dependencies.
  • Supports .NET Framework 4.6.1+, .NET Core 3.1 and runtimes compatible with .NET Standard 2.0
  • Writes PICT 2.0 Images (so should work on a Mac II onwards with Color QuickDraw)
  • Supports 1bpp, 2bpp, 4bpp, 8bpp and 32bpp image encoding, with PackBits compression
  • Extensions available for ImageSharp and System.Drawing.Bitmap

What's still to be done?

Really just 16bpp support, but it'll probably be another 5 years until I get around to it. 16bpp is supported in PICT, but uses a different compression method to PackBits: instead of working on individual bytes, 16bpp images are compressed by word (2-byte values). There's a sketch of this after the list below. Beyond that:

  • More compression options (disabling compression, JPEG, etc)
  • A decoder
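To illustrate the 16bpp difference, here's a minimal sketch of word-oriented PackBits in C#. This is illustrative only, not PictSharp's actual implementation; it assumes one scanline of 16-bit pixels, written out big-endian, with the same flag-byte scheme as byte PackBits:

    using System.Collections.Generic;

    static class WordPackBits
    {
        public static byte[] Encode(ushort[] row)
        {
            var output = new List<byte>();
            int i = 0;
            while (i < row.Length)
            {
                // Count how many consecutive words repeat (runs cap at 128).
                int run = 1;
                while (i + run < row.Length && row[i + run] == row[i] && run < 128)
                    run++;

                if (run >= 2)
                {
                    // Repeat packet: flag byte (257 - count), then the word once.
                    output.Add((byte)(257 - run));
                    output.Add((byte)(row[i] >> 8));   // high byte first (big-endian)
                    output.Add((byte)(row[i] & 0xFF));
                    i += run;
                }
                else
                {
                    // Literal packet: copy words until the next run starts (max 128).
                    int start = i;
                    while (i < row.Length && (i + 1 >= row.Length || row[i + 1] != row[i]) && i - start < 128)
                        i++;
                    output.Add((byte)(i - start - 1)); // flag byte: word count - 1
                    for (int j = start; j < i; j++)
                    {
                        output.Add((byte)(row[j] >> 8));
                        output.Add((byte)(row[j] & 0xFF));
                    }
                }
            }
            return output.ToArray();
        }
    }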

Gopher Clients Archive

I've started a GitHub repository to archive old Gopher clients for various platforms. As old websites are "upgraded" and taken offline, or old FTP servers get switched off, it's becoming more difficult to find these old clients.

Go check it out:
https://github.com/ObsoleteMadness/gopherclients

Pull requests which add more clients, archives of old FTP sites or descriptions and screenshots are very welcome.

I have some leave coming up and hope to revisit my old C# Gopher Server project. Plans include:

  • Port to .NET 6.0
  • Docker support
  • Better configuration and DI

MStar Datasheets

Scart to HDMI
A common example of the SCART to HDMI adaptors found on Aliexpress, eBay, etc

I've previously blogged about Chinese SCART adaptors. These are very low cost adaptors which convert RGB SCART to HDMI, with two main variations based on their board labels: SCART+HD2 HDMI 2014/12/23 and SHD1000 V1.

In the comments, readers identified that the main IC driving these is the MST6M182 (specifically the mst6m182xst-z1). After much searching online, and a hint from reader mmuman, I managed to track down the datasheet, or at least one close enough.

MST6m182VG Datasheet
The Mstar MST6m182VG Datasheet - link to Github

Of note to retro-gaming and retro-computing enthusiasts will be the registers for de-interlacing, including options to disable it entirely (USR_INTLAC). I'm keen to hook up a micro and start fiddling with registers, as I suspect these could be a very good low-cost option if adequate control of the image output was possible.

In addition, I've created a source repository to collect these datasheets. It's available at https://github.com/ObsoleteMadness/MSTAR_Datasheets. If you have any further datasheets, please submit a pull request, as I'd be happy to put them up on that repo.


Halfix PC Emulator

I recently stumbled upon a PC emulator, Halfix, on GitHub. The list of compatible operating systems (OS/2!) is quite comprehensive, and the code looks quite simple (especially in comparison to something like QEMU). Best of all, it supports Emscripten as a target, allowing for embedding in the web browser (like v86).

If you want to check it out, it's hosted on the Obsolete Madness Labs with an OS/2 2.0 image. I hope to add some more (and I noticed the author of Halfix is also preparing some demos, which will hopefully increase its popularity).

Halfix running in Chrome

Building

Building under Linux is no drama; just make sure you have NodeJS, Zlib, SDL and Emscripten installed. You might also need to modify makefile.js to remove the warnings-as-errors flag:

var flags = ["-Wall", "-Wextra", /*"-Werror",*/ "-g3", "-std=c99"];

Then it's just a matter of running the "makefile" to get a binary:

node makefile.js

Under Windows, it's a little harder due to the lack of a native gcc. This was my approach:

  1. Clone the Halfix code from Github.
  2. Adjust makefile.js as per above to remove the -Werror flag
  3. Install Chocolatey, then install mingw: choco install mingw
  4. Install Emscripten
  5. Download SDL 1.2 and zlib and extract to .\deps in the Halfix directory
  6. Build Halfix for Win32:
    node makefile.js win32

If all goes well you should see halfix.exe ready to run!

You can download a test build to play with at https://github.com/pgodwin/halfix/releases/download/test/halfix.win32.zip

Or run it in your browser at https://labs.obsoletemadness.com/.
