
Tuesday, November 13, 2018

Fun with OpenBSD for SPARC64 | High-Security O/S on 64-bit Sun UltraSPARC using QEMU Sun4U and User-mode Networking Back-End

Supratim Sanyal's Blog: Sun UltraSPARC 1
Sun Microsystems UltraSPARC 1
Image courtesy:

Supratim Sanyal's Blog: OpenBSD Logo
OpenBSD is among the few operating systems available today (along with the best O/S ever - Digital OpenVMS of course!) that can serve the paranoid as the basis of installations requiring bullet-proof security. The official OpenBSD website says, "Only two remote holes in the default install, in a heck of a long time!". Given OpenBSD was first released in 1995, the "heck of a long time" is 23 years. The first of the two holes was an OpenSSH vulnerability in 2002 that affected all operating systems using OpenSSH. The second, CVE-2007-1365, discovered eleven years ago, involved ICMP6 packets in OpenBSD's IPv6 implementation.

The versatile QEMU emulator project has matured enough to include stable emulation of the Sun4U machine featuring the 64-bit SPARC V9 processor architecture. I took the opportunity to try out OpenBSD release 6.4 for SPARC64 using the qemu-system-sparc64 hypervisor on OpenSUSE Tumbleweed running in an Oracle VirtualBox on CentOS 7 on a Dell PowerEdge R710.

The OpenBSD installer ISO CD image (install64.iso) was the fastest distribution download I have experienced yet, perhaps due to hosting on the Cloudflare CDN.

A QEMU qcow2-format 4GB disk image was created using:

$ qemu-img create -f qcow2 -o size=4G openbsd-sparc-disk-1.4gb.disk

The downloaded install64.iso OpenBSD installer CD image was renamed to openbsd-sparc-install64.iso for easier identification, and the QEMU SPARC64 emulator was fired up for installation:

qemu-system-sparc64 \
        -machine sun4u,usb=off \
        -realtime mlock=off \
        -smp 1,sockets=1,cores=1,threads=1 \
        -rtc base=utc \
        -m 1024 \
        -boot d \
        -drive file=openbsd-sparc-disk-1.4gb.disk,if=none,id=drive-ide0-0-1,format=qcow2,cache=none \
        -cdrom openbsd-sparc-install64.iso \
        -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-1,id=ide0-0-1 \
        -msg timestamp=on \
        -serial pty -nographic \
        -net nic,model=ne2k_pci -net user \
        -monitor stdio

"-boot d" boots the virtual Sun4U from the CD-ROM image. Also, "-net nic,model=ne2k_pci -net user" is the only QEMU networking configuration that works with the QEMU sparc64 hypervisor at this time; neither the tap nor the VDE (Virtual Distributed Ethernet) back-end could establish a network connection from the virtual machine. The tap back-end caused a kernel panic and crashed the VM after a few pings went through to the internet. The VDE back-end did not crash the system, but did not provide a network connection either. The two attempted and failed networking back-end parameters on the qemu command line were:

-net nic,model=ne2k_pci -net tap,ifname=inettap0,script=no,downscript=no
-net nic,model=ne2k_pci -net vde,sock=/tmp/vde-ip.ctl
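User-mode networking does not accept inbound connections by default, but QEMU's hostfwd option can forward a host port to the guest's sshd. This is an untested sketch for this setup (the host port 2222 is an arbitrary choice); it would replace the "-net user" portion of the command line:

```shell
# Forward host TCP port 2222 to guest port 22 over user-mode networking
-net nic,model=ne2k_pci -net user,hostfwd=tcp::2222-:22
```

With that in place, "ssh -p 2222 user@localhost" on the host should reach the guest once sshd is running in it.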

QEMU launched successfully, reporting the pseudo-terminal to which the virtual console serial port was redirected:

QEMU 3.0.0 monitor - type 'help' for more information
(qemu) qemu-system-sparc64: -serial pty: char device redirected to /dev/pts/5 (label serial0)
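The reported pty can be attached to from another host terminal with any serial communications tool; for example (assuming the /dev/pts/5 shown above):

```shell
# Attach to the pty QEMU allocated for the guest's serial console
minicom -D /dev/pts/5

# or, without minicom, use GNU screen as a serial terminal
screen /dev/pts/5
```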

On another host (OpenSUSE) terminal, the "minicom" serial communications tool was used to connect to the Sun4U virtual console serial port (/dev/pts/5 in the example above). The first screen of the installer was displayed after the virtual SPARCstation booted up:

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #1

Installation proceeded with no surprises.

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #2

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #3

Eventually installation completed successfully. The machine was then halted and QEMU stopped by entering "quit" at the "(qemu)" prompt.

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #4

QEMU was then launched again, this time with "-boot c" option to boot from the hard disk instead of the CD-ROM image:

qemu-system-sparc64 \
        -machine sun4u,usb=off \
        -realtime mlock=off \
        -smp 1,sockets=1,cores=1,threads=1 \
        -rtc base=utc \
        -m 1024 \
        -boot c \
        -drive file=openbsd-sparc-disk-1.4gb.disk,if=none,id=drive-ide0-0-1,format=qcow2,cache=none \
        -cdrom openbsd-sparc-install64.iso \
        -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-1,id=ide0-0-1 \
        -msg timestamp=on \
        -serial pty -nographic \
        -net nic,model=ne2k_pci -net user \
        -monitor stdio

The virtual SPARCstation booted up fine from the hard disk. At the "root device:" prompt, the device "wd0a" was provided. The defaults for the swap and dump devices were chosen.

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #5

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #6

Eventually the logon prompt was displayed.

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #6

OpenBSD comes with C and C++ compilers. It also provides a graphical X Window System environment; it should be possible to set up routing so that the user-mode networking back-end forwards X traffic to an external X server for display (the QEMU SPARC64 emulator does not support graphics yet).
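For example, with an X server on the host configured to accept TCP connections, X clients in the guest could be pointed at it through the user-mode network. An untested sketch (10.0.2.2 is the address at which QEMU's default user-mode network exposes the host to the guest):

```shell
# Inside the OpenBSD guest: send X clients to an X server on the QEMU host.
# The host X server must be listening for TCP connections and must allow
# this client (e.g. via xhost).
export DISPLAY=10.0.2.2:0
xterm &
```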

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #7

Supratim Sanyal's Blog: Installing OpenBSD SPARC 64-bit for Sun UltraSPARC using QEMU in SANYALnet Labs - Installation Screen #8


The QEMU OpenBSD SPARC 64-bit virtual machine can be downloaded from my google drive. The root password of the virtual QEMU Sun4u is "password".

Wednesday, November 7, 2018

Adding a Couple of World's Biggest and Most Expensive Hard Drives: IBM 3390 DASD on S/390 Mainframe

IBM 3390 hard drives - direct access storage device
IBM 3390 direct access storage device
Picture courtesy: IBM

So I added a couple of the world's biggest and most expensive disk drives to an IBM S/390 z/Architecture mainframe running Ubuntu 18 Linux. In real life, this would have set me back almost half a million dollars in today's money. Fortunately, I actually spent $0.00, thanks to the rock-solid Hercules-emulated IBM S/390 running Ubuntu Linux 18.

Adding new storage to big-iron is a bit different from adding SCSI or IDE disks to more familiar computers! This post chronicles the steps.

On the host running Hercules, create the virtual disk image file using the dasdinit tool (part of Hercules) for a new DASD device with device number 0122:

dasdinit -z -linux ./dasd/ubuntu-s390x.0122.disk 3390-3 0x0122 3200
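A breakdown of the dasdinit arguments, as I understand them (consult the Hercules documentation for the authoritative reference):

```shell
# dasdinit arguments annotated:
#   -z         build a compressed (CCKD) image file
#   -linux     format tracks with null records for Linux use
#   <file>     host file backing the emulated volume
#   3390-3     DASD device type and model
#   0x0122     volume serial (VOLSER)
#   3200       size in cylinders
dasdinit -z -linux ./dasd/ubuntu-s390x.0122.disk 3390-3 0x0122 3200
```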

Edit the Hercules configuration file and add the new dasd image filename for the new device number:

# .-----------------------Device number
# |     .-----------------Device type
# |     |       .---------File name and parameters
# |     |       |
# V     V       V
#---    ----    --------------------

# Display Terminals
0700 3270
0701 3270

# dasd
0120 3390 ./dasd/ubuntu-s390x.0120.disk
0121 3390 ./dasd/ubuntu-s390x.0121.disk
0122 3390 ./dasd/ubuntu-s390x.0122.disk

Then start Hercules and log into the emulated Ubuntu s390x as root.

Make sure Ubuntu sees the new drive (although it is not available for use yet). Check for the device number in the output of the lszdev command:

root@s390x:~# lszdev
Reading device information: 100.0% (7/7)
TYPE         ID                 ON   PERS  NAMES
dasd-eckd    0.0.0120           yes  yes   dasda
dasd-eckd    0.0.0121           yes  yes   dasdb
dasd-eckd    0.0.0122           no   no
ctc          0.0.0a00:0.0.0a01  yes  yes   slca00
generic-ccw  0.0.0700           no   no
generic-ccw  0.0.0701           no   no

To activate the new drive, use the chzdev command and verify with lszdev again:

root@s390x:~# chzdev -e 0122
ECKD DASD 0.0.0122 configured

root@s390x:~# lszdev
Reading device information: 100.0% (7/7)
TYPE         ID                 ON   PERS  NAMES
dasd-eckd    0.0.0120           yes  yes   dasda
dasd-eckd    0.0.0121           yes  yes   dasdb
dasd-eckd    0.0.0122           yes  yes   dasdc
ctc          0.0.0a00:0.0.0a01  yes  yes   slca00
generic-ccw  0.0.0700           no   no
generic-ccw  0.0.0701           no   no

Also use the lsdasd command to see the new drive in the list of drives:

root@s390x:~# lsdasd
Bus-ID     Status      Name      Device  Type  BlkSz  Size      Blocks
0.0.0120   active      dasda     94:0    ECKD  4096   2347MB    601020
0.0.0121   active      dasdb     94:4    ECKD  4096   1125MB    288000
0.0.0122   active      dasdc     94:8    ECKD  4096   2250MB    576000

At this point, the new uninitialized drive is available with the Linux device name dasdc. As usual, we partition the drive, but using the special fdasd tool (not fdisk). In my case, I just created one big partition spanning the entire drive.

root@s390x:~# fdasd /dev/dasdc
reading volume label ..: VOL1
reading vtoc ..........: ok

Command action
   m   print this menu
   p   print the partition table
   n   add a new partition
   d   delete a partition
   l   list known partition types
   v   change volume serial
   t   change partition type
   r   re-create VTOC and delete all partitions
   u   re-create VTOC re-using existing partition sizes
   s   show mapping (partition number - data set name)
   q   quit without saving changes
   w   write table to disk and exit

Command (m for help): v
Please specify new volume serial (6 characters).
current     : 0X0122
new [0X0122]:

volume identifier changed to '0X0122'

Command (m for help): n
First track (1 track = 48 KByte) ([2]-47999):
Using default value 2
Last track or +size[c|k|m|g] (2-[47999]):
Using default value 47999

Command (m for help): p

Disk /dev/dasdc:
  cylinders ............: 3200
  tracks per cylinder ..: 15
  blocks per track .....: 12
  bytes per block ......: 4096
  volume label .........: VOL1
  volume serial ........: 0X0122
  max partitions .......: 3

 ------------------------------- tracks -------------------------------
               Device      start      end   length   Id  System
          /dev/dasdc1          2    47999    47998    1  Linux native

Command (m for help): w
writing volume label...
writing VTOC...
rereading partition table...

Now, at last, a familiar command to format the partition with an ext4 file system:

root@s390x:~# mkfs.ext4 -T small /dev/dasdc1
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 575976 4k blocks and 576000 inodes
Filesystem UUID: a0010741-a0f4-4465-9629-6fd9a32a2bbc
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

The new DASD volume is now ready for use. It can be mounted at a suitable mount point, or added to /etc/fstab for automatic mounting at boot.

root@s390x:/# mount /dev/dasdc1 /mnt
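For automatic mounting at boot, a line along these lines could be added to /etc/fstab (the /mnt mount point just mirrors the example above; a dedicated mount point would be more typical):

```
/dev/dasdc1    /mnt    ext4    defaults    0    2
```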

Many thanks to Frank's post for these steps.

Sunday, October 21, 2018

Bionic Beaver on Z/Architecture: My Personal Mainframe IBM S/390 running Ubuntu 18 Linux on Hercules on OpenSUSE Tumbleweed

Supratim Sanyal's Blog: IBM S/390 Hercules Emulator Consoler running ubuntu linux on OpenSUSE tumbleweed on oracle virtual box in SANYALnet Labs
Hercules IBM Z/Architecture Mainframe Emulator Console

IBM S/390 Picture courtesy of The Computer Sheds
IBM S/390
Picture courtesy of The Computer Sheds
As a Digital alumnus with reverence for all things DEC, I have always been curious about competitor IBM's big-iron mainframes and operating systems. So far, my IBM experience has been with PC-DOS (here is a PC DOS 2000 based internet-facing web server), their incredible IBM OS/2 Warp, and recently IBM AIX in VirtualBox.

Inspired by Astr0baby's blog post and Jeff Sipek's guide, I decided to install Ubuntu 18 "Bionic Beaver" (Ubuntu 18.04.1 LTS, GNU/Linux 4.15.0-36-generic s390x) on a Hercules-emulated IBM S/390 mainframe computer running on OpenSUSE Tumbleweed inside an Oracle VirtualBox appliance. This is the first mainframe-class machine emulated at SANYALnet Labs.

The Hercules 4.0 Hyperion mainframe emulator was used for the guest S/390. "Hercules is an open source software implementation of the mainframe System/370 and ESA/390 architectures, in addition to the latest 64-bit z/Architecture." - Hercules official web page.

OpenSUSE Packages

The following packages were installed to prepare OpenSUSE Tumbleweed as the build and execution environment for the Hercules hypervisor, using the standard zypper package management tool:

# zypper install bridge-utils uml-utilities tunctl net-tools-deprecated ipcalc git cmake vde2 libcap-progs libpcap-devel libpcap1 pcapdump pcapinfo
# zypper install -t pattern devel_C_C++


OpenSUSE Tumbleweed; IP:
Guest: S/390, IP Address
DNS: (google DNS)

OpenSUSE's firewall was interfering with the guest S/390's ability to resolve domain names via DNS and access the internet over HTTP(S), both during and after installation of Ubuntu 18 for s390x. For example, the following message was observed during the Ubuntu s390x installation in the guest:

Supratim Sanyal's Blog: Ubuntu s390x Installer Error accessing Archive Mirrors over Internet
Ubuntu s390x Installer Error accessing Archive Mirrors over Internet
To get around this problem, the host (OpenSUSE) firewall daemon "firewalld" was disabled completely, and a startup script was added to flush and clear iptables rules at boot time.

# systemctl disable firewalld
# systemctl stop firewalld

OpenSUSE host network setup executable script at /root/netsetup/

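The iptables flush script itself is not reproduced here; a minimal sketch of such a script (the file name and install location are my assumptions, not the original) could be generated like this:

```shell
# Write a minimal iptables flush script; install it somewhere like
# /usr/local/sbin and invoke it from /etc/init.d/after.local at boot.
cat > flush-iptables.sh <<'EOF'
#!/bin/sh
# Flush all rules and user-defined chains, set permissive default policies
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
EOF
chmod +x flush-iptables.sh
```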

The following was added to /etc/init.d/after.local to get the network setup script to execute at boot time:

# --
# /etc/init.d/after.local
# --
touch /forcefsck
/root/netsetup/ > /tmp/ 2>&1
exit 0

Also, the after-local service was enabled so that the /etc/init.d/after.local script executes at boot time:

# systemctl enable after-local.service
# systemctl start after-local.service

IBM S/390 Mainframe in Hercules and Ubuntu 18 (s390x) installation

Supratim Sanyal's Blog: Hercules IBM S/390 z/arch emulator startup and CD-ROM boot command
Hercules startup and CD-ROM boot command

Astr0baby's instructions were followed for the rest of the installation of Ubuntu 18 s390x on the guest. The full command to mount the downloaded distribution CD-ROM ISO image on OpenSUSE's /mnt directory is:

$ sudo mount -t iso9660 -o loop ubuntu-18.04.1-server-s390x.iso /mnt

The following hercules.cnf file was used:



Hercules had to be launched from the OpenSUSE root account; even sudo from a user account did not let Hercules fully access the tun adapter for networking. This is despite setting permissions on /dev/net/tun, /usr/local/bin/hercifc etc. as described in "Hercules Version 4: TCP/IP networking with Hercules".

Since any desired Ubuntu packages could be installed later, only the "SSH server" option was selected in addition to the Ubuntu base install on the installer's software selection screen.

The actual Ubuntu 18.04 s390x installation turned out to be uneventful, following a path similar to an Ubuntu installation on x64. The automatic post-installation reboot did not work, as Hercules halted when the guest operating system shut down to reboot. Exiting and relaunching the emulator and booting the guest operating system worked fine. Here is a video captured during the installation process:

Automatic boot-up of the guest Ubuntu s390x on startup of the Hercules hypervisor was achieved by creating a file "hercules.rc" in the same directory as "hercules.cnf", containing the same command used to boot manually at the Hercules prompt:

ipl 120
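Creating the file is a one-liner; for example, in the directory containing hercules.cnf:

```shell
# hercules.rc is executed by Hercules at startup, so the guest is
# IPLed from DASD device 0120 automatically.
cat > hercules.rc <<'EOF'
ipl 120
EOF
```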

Pressing ESC in the Hercules console screen toggles between the command console and a "graphical" view of the S/390 showing processor registers, the program status word/flags, CPU usage, disk and network I/O etc., as in the example at the top of this post.

Anything typed at the Hercules console's "herc ====>" prompt starting with a period is sent directly to the virtual guest (i.e. not processed by the emulator itself). Therefore, even if SSH access to the Ubuntu s390x guest is unavailable, it is possible to log in to Ubuntu s390x by entering the username and password prefixed with a period (i.e. a dot) at the Hercules console, and to execute Linux commands by typing them the same dot-prefixed way.
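For example (the username, password and command shown are placeholders):

```
herc ====> .root
herc ====> .mypassword
herc ====> .uname -a
```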

It is exciting to be able to run a mainframe version of Ubuntu as a hobbyist system!




You can download free snippets of the experiment's session logs from my google drive. In addition, here are some random screenshots taken while having all this fun!

Saturday, October 6, 2018

Pandora FMS and eHorus - a great integrated network monitoring and SaaS cloud-based remote management system

Supratim Sanyal's Blog: eHorus integration with Pandora FMS at SANYALnet Labs
eHorus integration in the Pandora FMS web interface (Processes view)

After playing around with the usual network monitoring tools, all of them impressive (Nagios, PRTG, Zabbix, Zenoss), I settled on Pandora FMS a few years ago to monitor the hobbyist servers in SANYALnet Labs. With solid agent-based real-time performance monitoring and alarming capabilities, and an impressive "recon" task with automatic network hierarchy discovery and visual network mapping features, Pandora FMS has been serving me very well.

After a recent upgrade to the latest Pandora FMS distribution, I discovered that it supports seamless integration with the eHorus cloud-based remote management system (SaaS) for total command and control of my network nodes, right from inside the Pandora FMS web interface as well as from the eHorus internet portal.

The steps to deploy eHorus, along with the required registration form and agent downloads, are described pretty well at the eHorus web site. The free tier allows up to 10 nodes and one concurrent user - quite enough for a hobbyist environment like mine.

I started off by registering an account at the eHorus portal and installing the CentOS 7 64-bit eHorus agent on my Dell PowerEdge R710 virtualization host that runs a bunch of SANYALnet Labs hobbyist nodes.

I downloaded and installed the eHorus agent for 64-bit CentOS 7 following these instructions. The only change I made to the /etc/ehorus/ehorus_agent.conf file was to substitute my real eHorus userid in the "#eh_user USER" parameter.

I then enabled and started the ehorus_agent_daemon using the systemctl command.

# systemctl enable ehorus_agent_daemon
# systemctl start ehorus_agent_daemon
# systemctl status ehorus_agent_daemon
● ehorus_agent_daemon.service - LSB: eHorus Agent startup script
   Loaded: loaded (/etc/rc.d/init.d/ehorus_agent_daemon; bad; vendor preset: disabled)
   Active: active (running) since Fri 2018-10-05 23:55:20 UTC; 2h 13min ago
     Docs: man:systemd-sysv-generator(8)
   CGroup: /system.slice/ehorus_agent_daemon.service
           └─20940 /usr/bin/ehorus_agent -f /etc/ehorus/ehorus_agent.conf

Oct 05 23:55:18 dell-poweredge-r710.sanyalnet.lan systemd[1]: Starting LSB: eHorus Agent startup script...
Oct 05 23:55:19 dell-poweredge-r710.sanyalnet.lan ehorus_agent_daemon[20908]: 2018-10-05 23:55:19 [log][2] WARNING: no pas...t!
Oct 05 23:55:20 dell-poweredge-r710.sanyalnet.lan ehorus_agent_daemon[20908]: eHorus Agent is now running with PID 20940
Oct 05 23:55:20 dell-poweredge-r710.sanyalnet.lan systemd[1]: Started LSB: eHorus Agent startup script.
Hint: Some lines were ellipsized, use -l to show in full.

Checking the eHorus web portal, I could now see my server:

Supratim Sanyal's Blog: eHorus Portal (SANYALnet Labs)
eHorus Portal (internet web site) with one server

eHorus provides the following options for command and control of configured servers:

  • Terminal
  • Desktop
  • Processes
  • Services
  • Files

Supratim Sanyal's Blog: eHorus Details Screen (SANYALnet Labs)
eHorus Node Details Screen at Web Portal

eHorus integrates with Pandora FMS, enabling seamless monitoring and control of nodes from right inside the Pandora FMS web UI. Here is an example of an eHorus terminal window inside a Pandora FMS web session:

Supratim Sanyal's Blog: eHorus Details Screen (SANYALnet Labs)
 eHorus terminal inside Pandora FMS

I will gradually deploy eHorus remote management agents on some of my other nodes. Unfortunately, the eHorus agent is not available for OpenVMS VAX or Alpha, Solaris, AIX, NetBSD and similar unusual operating systems that I play around with.

Wednesday, September 26, 2018

Establish SSH connection to OpenVMS Alpha 8.3 + TCP/IP Services 5.6 on DEC AlphaServer | Getting Past diffie-hellman-group1-sha1 and ssh-dss for Legacy Operating Systems


This post falls in the "don't reinvent the wheel" category.

One of my toys is RAPTOR, an emulated AlphaServer ES40 running the OpenVMS Alpha 8.3 operating system. It connects to HECnet over DECnet Phase IV, and to the internet using Digital TCP/IP Services for OpenVMS. It runs an internet-facing web server (OSU DECthreads HTTP Server for OpenVMS), effortlessly handling legitimate and spam traffic.

Digital/Compaq/HP TCP/IP Services for OpenVMS Alpha 5.6 includes an SSH server allowing secure network access from SSH clients.


  HP TCP/IP Services for OpenVMS Alpha Version V5.6
  on an AlphaServer ES40 833 MHz running OpenVMS V8.3

Due to the age of TCP/IP Services for OpenVMS Alpha Version V5.6, modern SSH clients do not directly establish a secure communications channel with RAPTOR. Ubuntu 17 Linux, for example, provides the following contemporary SSH client:

someuser@moksha:~$ ssh -V
OpenSSH_7.5p1 Ubuntu-10ubuntu0.1, OpenSSL 1.0.2g  1 Mar 2016

and attempting to ssh directly to RAPTOR produces the following error:

someuser@moksha:~$ ssh vmsuser@
Unable to negotiate with port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

Looking at the OpenSSH Legacy Options page, I created a ~/.ssh/config file with the following contents:

        KexAlgorithms +diffie-hellman-group1-sha1

I set the file permissions on ~/.ssh/config to owner read/write only (not sure if that is needed), and tried again. This time, a different error showed up:

someuser@moksha:~$ chmod 600 ~/.ssh/config
someuser@moksha:~$ ls -l ~/.ssh/config
-rw------- 1 someuser somegroup 88 Sep 26 02:17 /home/someuser/.ssh/config

someuser@moksha:~$ ssh vmsuser@
Unable to negotiate with port 22: no matching host key type found. Their offer: ssh-dss

Looking more at the OpenSSH Legacy Options page, I added another line to the ~/.ssh/config file:

        KexAlgorithms +diffie-hellman-group1-sha1
        HostKeyAlgorithms +ssh-dss
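Rather than enabling these legacy algorithms for every connection, they can be scoped to the one legacy host with a Host stanza; a sketch (the host name is a placeholder):

```shell
# Append a host-scoped legacy-algorithm stanza to ~/.ssh/config
# (raptor.example.com is a placeholder for the real host name or IP)
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host raptor.example.com
    KexAlgorithms +diffie-hellman-group1-sha1
    HostKeyAlgorithms +ssh-dss
EOF
chmod 600 ~/.ssh/config
```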

And presto, I am able to ssh from Ubuntu 17 into OpenVMS Alpha!

someuser@moksha:~$ ssh vmsuser@
The authenticity of host ' (' can't be established.
DSA key fingerprint is SHA256:somestring/somestring.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (DSA) to the list of known hosts.
vmsuser@'s password:

Welcome to OpenVMS (TM) Alpha Operating System, Version V8.3

System: RAPTOR, AlphaServer ES40 833 MHz
CPU 0    State: RUN                CPUDB: 81C16000     Handle: * None *
       Process: VMSUSER              PID: 000000B9

Product:  DECNET        Node:  RAPTOR               Address(es):  1.558
Product:  TCP/IP        Node:  raptor.sanyalnet.lan Address(es):

  26-SEP-2018 02:07:25

$ lo

Connection to closed.SEP-2018 02:09:35.51


Saturday, September 22, 2018

Running AIX x86 on Laptop | IBM AIX PS/2 1.3 for Intel i386 in Virtual Box

Supratim Sanyal's Blog: IBM AIX PS/2 1.3 for Intel i386 running X11 X Windows Motif Desktop in Virtual Box

AIX 1.3 for PS/2 is unique in that it is the only AIX release that runs on the Intel i386 processor architecture. IBM's announcement letter is still available online and starts off by describing AIX 1.3 for PS/2 as "AIX PS/2 Operating System Version 1.3 and its associated Conditions of Use Products (COUs) provide full hardware support and exploitation for all models of IBM PS/2 system units based on the 32-bit INTEL 386sx-16MHz up through the INTEL 486DX2-66MHz, utilizing both IBM Microchannel or IBM AT-Bus architectures."

As a DEC alumnus, the only IBM operating system I had ever used was PC DOS. This was by choice at the very beginning of my tryst with computing: DEC hardware and operating systems were used in all sorts of interesting factory shop-floor real-time systems, SCADA, nuclear power plants, space technology, telecommunications etc., while IBM mainframes and minicomputers were more popular in (boring!) banking and financial systems.

I have since come to regret that unfounded bias, and when my favorite blogger posted an article on running AIX 1.3 inside VirtualBox, I jumped on it and got it to work on my Lenovo Legion Y720 gaming laptop.

And, I also learned "AIX" actually stands for "Advanced Interactive Executive".

Supratim Sanyal's Blog: Running IBM AIX Operating System on PC Virtual Box - Graphical Desktop X11 X-Windows Motif

AIX for PS/2 supports an X Windows Motif-based graphical desktop. A quick way to check the X11 desktop is to type "xinit", which launches an X11/Motif graphical interface with a terminal, and then type "xdt" to launch the IBM Graphical Desktop. The complete AIX for PS/2 X Windows Users' Guide is still available online.

The virtual machine boots up from floppy disks; two boot floppies are needed. Booting from the first floppy disk loads the boot loader (IBM AIX PS/2 Bootstrap) itself:

SANYALnet Labs | IBM AIX boot sequence in VirtualBox

SANYALnet Labs | IBM AIX PS/2 PC Intel i386 Boot

On the next "LOAD A SYSTEM FROM THE DISKETTE" screen, the correct operating system choices need to be made:

Supratim Sanyal's Blog: IBM AIX PS/2 Intel i386 PC Boot

Module to be loaded: unix.gen
System mode: Multi User
Run system from hard disk: Yes

Proceeding from here, the Bootstrap asks for the second floppy disk to be inserted and continues booting AIX from there.

Supratim Sanyal's Blog: IBM AIX PS/2 PC Virtual Box Boot
Soon, an IBM AIX PS/2 Operating System login prompt is presented.

Supratim Sanyal's Blog: IBM AIX PS/2 PC i386 Intel Operating System Login
The X Windows/Motif graphical desktop can be launched using the "xinit" command after logging in. This launches the GUI desktop with a shell command prompt window. Issuing "xdt" launches the IBM AIX PS/2 AIXwindows Desktop.

In addition to the X Windows programs in /usr/bin/X11, AIXwindows applications such as "aixterm" are included.

Unfortunately, I have not been able to get networking to work yet. The AIX PS/2 announcement lists the following supported communication adapters:

IBM PS/2 Adapter/A for Ethernet Networks (#0789)(6451233)
IBM Token Ring Network 16/4 Adapter/A (#1049)(74F9410)
IBM Token Ring Network 16/4 Adapter II
IBM Token Ring Network 16/4 Busmaster Server Adapter/A (#4041)(74F4140)

I have been unable to present any of these to AIX PS/2 in the VirtualBox hypervisor, and will gladly welcome ideas for putting AIX on the network in the comments below.


You can download the Oracle VirtualBox appliance for hobbyist use only from my google drive.

Friday, May 4, 2018

A Free Public VDE (Virtual Distributed Ethernet) Switch: Connect anything to anything anywhere over layer-2 ethernet

The public VDE networking server at Università di Bologna no longer seems to be up, so I deployed my own in the spirit of that original effort. It is open-access, public, and available to everybody.

It allows Virtual Distributed Ethernet (VDE) switches anywhere to be connected securely over the internet.

To connect to my free open access VDE public ethernet network, just virtually "wire" your switch to my public one using this command:

dpipe vde_plug = ssh vde_plug

I am using this VDE switch to connect a VAX-11/780 in Kitchener, Ontario, Canada to a bunch of DECnet nodes in the Washington DC metro area. The exact commands I am using to set up the local VDE switches and connect them via the public VDE switch are:

/usr/local/bin/vde_switch -t vde-decnet-tap0 -s /tmp/vde-decnet.ctl -m 666 --mgmt /tmp/vde-decnet.mgmt --mgmtmode 666 --daemon --fstp

/usr/local/bin/dpipe /usr/local/bin/vde_plug /tmp/vde-decnet.ctl = /usr/bin/ssh vde_plug

The second command line runs in the foreground in the terminal unless you push it into the background using screen, nohup etc.

Also, the above command lines are for CentOS 7, on which I built VDE from source. On Ubuntu, you can simply install vde2 from the repos, which puts the tools in /usr/bin instead of /usr/local/bin.

If possible, please enable FSTP when you create your local VDE switches (use the --fstp parameter on the vde_switch command line) to help control ethernet loops and floods, so that I don't have to keep rebooting my server.
