Convert an X.509 (PKI) certificate to GPG

I see this question a lot all over the web, and it results in a lot of people saying it can’t be done. However, this wasn’t the case back when PGP was owned by McAfee and had an official Windows client. Back then, you simply right-clicked on the file and imported it.

Heck, you could even create a CSR for your PGP key and have a CA sign it, but only two CAs would do it: Thawte (before they were bought out by Verisign) and CAcert. As far as I know, both have since removed this functionality.

PGP.com has gone by the wayside, and many CAs no longer even offer S/MIME email certificates, which is a shame. However, as this post will show, PKI and GPG suffer from major usability issues. In fact, the only widely successful use of public-key cryptography is HTTPS, and that is too hard for most admins to do well. U2F seems to be doing well, but I have grave concerns about its long-term security; I have a bunch of specs to pore over before I make my concerns public.

Earlier this week, I used two different providers to get email certificates. One generated a private key for me and sent it to me via email, along with the password in another email. As Ben Franklin said, “three can keep a secret, if two of them are dead”; a shared secret really isn’t a secret. Email isn’t secure, and it never really has been, so sending me secrets unencrypted is not really useful.

I need to generate my own secret and ask for a signature; this is the whole point of a CSR. In the old days you could tell your browser to generate a certificate using a JavaScript API; apparently that has now been deprecated and only works in Internet Explorer. Even though deprecating it makes some sense, because getting true random numbers is hard, it makes for the first pain point in using public-key cryptography.

I finally found a company that would issue me my email certificate using the Sectigo CA, and they “validated my identity” … NOT. They validated my email and took a copy of my driver’s license, over unencrypted email. They never once talked to me in person, and never validated that the person who sent the email matched the license, etc. This is to be expected, I guess; I have mentioned that online identity validation really isn’t validation, in my previous article. To generate my own certificate and have them sign it, I had to use IE, on Windows. Yuck. Even Microsoft doesn’t want you using IE anymore.

Now I have a certificate, and I want to show how to import it into PGP, so let’s get started. Gnu Privacy Guard (GPG) is the Linux default for working with PGP. For the purpose of this discussion, GPG, PGP, and OpenPGP are synonymous terms, though PGP was a commercial product, GPG is the Linux implementation, and OpenPGP is the standard and not really a product at all.

I am using Ubuntu 20.04 (focal). I downloaded Kleopatra because it is a nice cross-platform GUI for GnuPG; I am only using command line utilities for the work, and using Kleopatra to view the results. The pem2openpgp tool is included in monkeysphere.

sudo apt install kleopatra monkeysphere -y

Steps

  • Break the pfx (p12) into pem files that can be used. For some reason, GPG can’t handle the standard encoding.

    openssl pkcs12 -in sectigo.pfx -nokeys -out gpg-certs.pem
    openssl pkcs12 -in sectigo.pfx -nocerts -out gpg-key.pem
  • Combine the keys into something GPG recognizes
    openssl pkcs12 -export -in gpg-certs.pem -inkey gpg-key.pem -out gpg-key.p12
  • Import into GPG
    gpgsm --import gpg-key.p12
  • At this point we have the p12 imported, and we can see it in Kleopatra, but we can’t use it for PGP operations.
    cat gpg-key.pem | PEM2OPENPGP_USAGE_FLAGS=authenticate pem2openpgp "Your Name <your@email.address>" > key.pgp
  • Now!!!! we have a PGP key; import key.pgp into gpg and it will have the exact same key as your certificate.
    gpg --import key.pgp

Now, if you pull up Kleopatra, you can see that you have a certificate AND a PGP key. If you delete EITHER of them and delete the private key, you will see that BOTH private keys are removed, because there aren’t two private keys, just a single one.
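
You can also inspect both from the command line; these are standard GnuPG commands:

gpg --list-secret-keys --keyid-format long
gpgsm --list-secret-keys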

Remember to delete all these intermediate files, and to use strong passwords. Ideally you would store these keys on a FIPS-compliant smartcard that uses two-factor authentication.

The fallacy of identity verification online.

I have done a lot of work for various companies in different industries, including financial and insurance. Everyone wants to validate people’s identities for various reasons: HIPAA, payment cards, or just validating that a person is who they claim to be for signatures.

The assumption I see over and over again is that if the person on the other end of the website knows the answers to questions based off a credit report, then they are who they claim to be. This assumption is wrong, dead wrong; it has always been wrong, and it has been made all the more so by the Equifax hack.

As if the hack weren’t bad enough, there are companies out there that collect information about you and sell it. They know WAY more about you than you would like. In many cases, the companies that coalesce this data know more about you than you could ever imagine. This is why the LexisNexis breach was so bad.

So, how can any company that claims to assure identity actually do it? There might be exceptions, but my experience says nobody truly does. The claims of such companies look all the more repugnant when you see that the entirety of the “assertion” is based on an entirely automated process asking questions off a credit report; how these claims stand up in court is beyond me. All it will take is a single person signing for a house online through one of these places, and all of it will come crashing down like the house of cards that it truly is.

It is hard enough to validate someone’s identity in person: fake IDs, bribery, notaries not doing their jobs… Honestly, right now, I can’t think of anyone that does a better job of asserting someone’s identity than the corporations involved in health insurance do for their employees. Heck, employers do a better job of validating their legitimate employees’ identities than most Notaries Public.

CentOS Issues

Trying to set up CentOS, which isn’t my normal Linux, I found a number of issues.

      • Time drifts horribly. I tried installing the standard NTP package, but something about CentOS didn’t want to let it work properly. Since CentOS wants to use chrony, I guess I will use chrony.
      • Webmin doesn’t correctly authenticate against the normal users.
      • Webmin doesn’t allow sudo users to log in

To fix the time drift: unlike most other *nix distributions, CentOS doesn’t use NTP; it uses something called “chrony”. To fix it, follow these instructions.
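
As a minimal sketch (assuming CentOS 7 with systemd; the package and service names may differ on older releases):

sudo yum install -y chrony
sudo systemctl enable chronyd
sudo systemctl start chronyd
chronyc tracking

The last command reports whether the clock is actually synchronizing.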

To install webmin so that it works as cleanly as debian based distros:

      • sudo -i
      • yum -y update
      • yum -y install perl perl-Net-SSLeay openssl perl-IO-Tty perl-Encode-Detect
      • vi /etc/yum.repos.d/webmin.repo
      • then add the following block to the file

        [Webmin]
        name=Webmin Distribution Neutral
        #baseurl=http://download.webmin.com/download/yum
        mirrorlist=http://download.webmin.com/download/yum/mirrorlist
        enabled=1

      • rpm --import http://www.webmin.com/jcameron-key.asc
      • yum install -y webmin
      • vi /etc/webmin/miniserv.conf
      • add the following line to the file

        sudo=1

      • firewall-cmd --zone=public --add-port=10000/tcp --permanent
      • firewall-cmd --reload
      • chkconfig webmin on
      • service webmin restart

Let’s break this down.

“sudo -i” logs us in as root. Dangerous, but you will need to sudo everything if you don’t, so you might as well just log in as root.

“yum update” updates all the software on your machine.

“yum install perl perl-Net-SSLeay openssl perl-IO-Tty perl-Encode-Detect” installs all the prerequisites to enable logging in using Unix users. Why these aren’t marked as requirements in the RPM, I have no idea.

Creating the webmin.repo allows webmin to be installed, and kept up to date via yum.

The rpm import statement is there to get the GPG key that the software package is signed with. This allows yum to validate that the software install package is what the publisher created. In truth, it is rpm that does the actual verification and install, while yum is used to check for updates and download them.

Modification of the miniserv.conf file is essential to let users that can sudo log in; otherwise you can only log in as root.

The firewall rules reconfigure firewalld to allow access to webmin, and reload the configuration.

The chkconfig command enables the service to start automatically on boot.

Finally, the “service restart” command starts, or restarts, webmin.
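
As a quick sanity check (ss ships with CentOS 7; on older releases netstat -tlnp works too), you can verify that Webmin is listening on its port:

ss -tlnp | grep 10000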

Pengdows.CRUD

This very helpful wrapper over ADO.NET has been released to NuGet free of charge and will soon be released as open source. It is now and will forever be free to use.

At the moment, I have only included the .NET 4.0 binary; this will be remedied soon. For now, create a .NET 4 application.

Here is some example code with comments on basic functionality, just to jump start you.

//requires: using System.Data; using System.Diagnostics;
//          using System.Linq; (for OfType<DataColumn>() below)
//create a context to the database, this will allow us
// to create objects of the correct type from the factory
// as well as find out about things like quote prefixes and
// suffixes. You may either choose a connection string and
// provider name, or simply pass the "name" of a
// connectionString entry in the config file.
var context = new DatabaseContext("dsn");

//create a container for the SQL and parameters etc.
var sc = context.CreateSQLContainer();

//write any sql, I am making sure to create it using
//the provider's quotes. The SQLText property is a
//StringBuilder, allowing for complex string manipulation
sc.SQLText.AppendFormat(@"SELECT {0}CategoryID{1}
  ,{0}CategoryName{1}
  ,{0}Description{1}
  ,{0}Picture{1}
 FROM {0}Categories{1}
 WHERE {0}CategoryID{1}=", context.QuotePrefix, context.QuoteSuffix);

//create a parameter, automatically generating a name
//and attaching it to the SQLContainer
var p = sc.AddWithValue(DbType.Int32, 7);

//append the name of the parameter to the SQL string.
//if the provider only supports positional parameters,
// that will be used. However, if named parameters are
// supported, the proper prefixing will be used with the
//name. For example, @parameterName for SQL Server, and
// :parameterName for Oracle.
sc.SQLText.Append(context.SQLParameterName(p));

//write the resulting SQL for examination by the programmer
Debug.WriteLine(sc.SQLText);

// get a datatable
var dt = sc.ExecuteDataTable();

// get the first row of the datatable
var row = dt.GetFirstRow();

//loop through and output all the data to the screen.
foreach (var itm in dt.Columns.OfType<DataColumn>())
{
     Console.WriteLine("{0}: {1}", itm.ColumnName, row[itm]);
}

So this is easy, but why would you want to use this? What does it provide over plain ADO.NET, or EnterpriseBlocks?

Here are some of the benefits.

  • Self-contained blocks for execution.
    • SQLContainers – know which database to execute against
      • Carry the SQL and parameters in 1 encapsulated object
      • Adds “ExecuteDataSet” and “ExecuteDataTable” functions, making it easy to get disconnected DataSet and DataTable objects. Also, exposing the DataTable eliminates the overhead of always getting a DataSet when only a single table is needed.
      • Changes the default on DbDataReaders to automatically close the connection upon closing of the object, based on the ConnectionMode.
    • DatabaseContext – Encapsulates much of the programming people skip
      • Using a factory to create the connections
      • Interrogates the provider to determine
        • If there is support for named parameters
        • What the quoting characters are, defaulting to the SQL-92 standard double quote ( " ).
        • If there is support for stored procedures.
        • What the named parameter indicator is (such as @ for SQL Server or : for Oracle).
        • Validates connection string
        • Will automatically read from the “ConnectionStrings” area of the app.config or web.config
        • Allows you to specify connection mode
          • Standard – uses connection pooling, asking for a new connection each time a statement is executed, unless a transaction is being used.
          • SingleConnection – funnels everything through a single connection, useful for databases that only allow a single connection.
          • SqlCe – keeps a single connection open all the time, using it for all write access, while allowing many read-only connections. This prevents the database from being unloaded and keeps within the rule of only having a single write connection open.
          • SqlExpressUserMode – the same as “Standard”, except it keeps one connection open to prevent unloading of the database. This is useful for the LocalDB feature in SQL Express.
        • Sets up SQL Server connections with the proper options to support Indexed Views.
        • Homogenizes connection strings using the DbConnectionStringBuilder class.

Building a BeagleBone Firewall: Part 6

Now we are ready to plug in the USB ethernet adapter. I prefer to make this ethernet connection the connection to your internet provider, but there is nothing to say you can’t make it your LAN connection.

If you don’t have any previous networking experience: LAN means Local Area Network; this is what will be behind your firewall, hidden and protected from the outside world. The way we are going to set up the firewall, all the computers behind it will look like a single computer. By contrast, the internet is a Wide Area Network, or WAN for short.

Make sure your USB ethernet adapter is plugged in, and run the following command.

lsusb

You should see something like the following for the output

Bus 001 Device 004: ID 1267:0103 Logic3 / SpectraVideo plc G-720 Keyboard
Bus 001 Device 003: ID 0b95:7720 ASIX Electronics Corp. AX88772
Bus 001 Device 002: ID 2109:2811  
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

In my case, the ASIX Electronics Corp. line is my USB ethernet adapter. This is very good; it means I don’t have to compile a new Linux kernel module for it. Now, we want to see a little more information about it. Enter the following into the console.

ifconfig

And you will get something like the following for output

eth0      Link encap:Ethernet  HWaddr d0:39:72:54:4d:e7  
          inet addr:10.0.1.1  Bcast:10.0.1.255  Mask:255.255.255.0
          inet6 addr: fe80::d239:72ff:fe54:4de7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:22296295 errors:0 dropped:118 overruns:0 frame:0
          TX packets:32827682 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1367972463 (1.3 GB)  TX bytes:2528887318 (2.5 GB)
          Interrupt:40 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1644 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1644 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:139327 (139.3 KB)  TX bytes:139327 (139.3 KB)

usb0      Link encap:Ethernet  HWaddr ba:67:28:61:85:ea  
          inet addr:192.168.7.2  Bcast:192.168.7.3  Mask:255.255.255.252
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

rename3   Link encap:Ethernet  HWaddr b6:c3:97:fe:20:c0  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:34242743 errors:593 dropped:0 overruns:0 frame:593
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)    

This shows our network connections. “usb0” is NOT our USB-to-ethernet adapter; rather, it is something that is pre-configured in our Ubuntu distribution for the BeagleBone. I must admit, I am not 100% sure what good it is. “rename3” is the item we are looking for. As you can see, we need to set up an IP for the adapter, and the name “rename3” is rather obnoxious. To satisfy my inner “Monk”, and because I am a lazy typist, I want to make the name shorter and more meaningful, so we will rename the adapter to reflect its purpose: “wan0”.

We will need to take note of the HWaddr, which is the MAC address, so we can edit the next file to rename the adapter. To do this renaming, open the file using your text editor of choice.

sudo nano /etc/udev/rules.d/70-persistent-net.rules

You should see a file that looks like

# Auto generated by RootStock-NG: setup_sdcard.sh
# udevadm info -q all -p /sys/class/net/eth0 --attribute-walk

# BeagleBone: net device ()
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

At the end of the file, add the following line, making sure to replace the MAC address with the one from your adapter.

# USB device 0x:0x (AX88772)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="b6:c3:97:fe:20:c0", NAME="wan0"

Essentially, we are adding a device to the net(working) subsystem, uniquely identifying it by the MAC address (you did remember to change it to yours, right?).  Save the file, and do a clean reboot with the following command

sudo reboot

When it finishes rebooting, run the ifconfig command again to verify the adapter was correctly renamed.

Now we need to set up the adapter to retrieve an IP address from your ISP. To do that, we need to edit /etc/network/interfaces.

sudo nano /etc/network/interfaces

Now add the following at the bottom of the file

# The WAN(internet) network interface
auto wan0
iface wan0 inet dhcp

Like many scripts and configurations in the Unix world, the “#” tells the system to ignore the rest of the line, so you can put in stuff that is meaningful to you. Programmers call these lines “comments”.

auto wan0 tells the system to bring up this network interface upon boot.

iface wan0 inet dhcp tells the system: for interface (iface) wan0, get a version 4 internet (inet) address from DHCP. Because my ISP doesn’t support IPv6, I won’t set that up right now. If you have a static IP from your ISP, or want to do additional things, please refer to the Debian documentation.
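
For reference, a static setup would look something like this; the address, netmask, and gateway here are placeholders you would replace with the values from your ISP:

# The WAN(internet) network interface, static example
auto wan0
iface wan0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1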

Now we need to set up a very minimal firewall so it is safe for us to connect to the internet, and make sure all these changes work. That will be part 7.

Building a BeagleBone Firewall: Part 5

At this point we have a pretty nice little Linux box, quite acceptable for doing many things. If we add a stateful firewall, it would make an acceptable kiosk machine.

However, we have a pretty big security hole we should fix right now. You see, the images we used to put Linux on the eMMC and microSD card have pre-installed SSH keys, which means every single machine installed with these images has the exact same set of public and private keys. If you don’t understand what that means, that is OK, but suffice it to say, if we don’t fix it there is a major security hole. So let’s fix it.

First we want to remove all the old host keys, but not the config files, so from the console issue the following command.

sudo rm -rf /etc/ssh/ssh_host_*

Now, we will want to generate the new keys.

sudo dpkg-reconfigure openssh-server

Finally, we need to restart the ssh server.

sudo service ssh restart
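
To confirm that fresh keys were generated, you can print their fingerprints; ssh-keygen -lf is standard OpenSSH:

for f in /etc/ssh/ssh_host_*.pub; do ssh-keygen -lf "$f"; done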

For more info on why we want to do this, read this. I highly recommend that you shut the BeagleBone down, pop out the microSD card, boot from the eMMC (simply boot without the microSD), and repeat this process on that OS as well. Of course, when you are done, shut down the BeagleBone, put the microSD back in, and boot back up.

Next up, we will configure the USB-to-ethernet adapter.

Building a BeagleBone Firewall: Part 4

One of the main reasons for building this device is to make sure the software is updated (patched) regularly. I have a multifaceted strategy to do that.

Before we go further, it is a good time to decide if you want XWindows (or simply X) on your firewall. X makes using the machine and configuring it more friendly. Just like anything else though, the more software you have installed, the more software can be exploited. If you wish to remove X and all its components, it is easiest to run the following at the command line, then get a cup of coffee; this will take a while.

sudo apt-get purge libx11.* libqt.* libgd3 -y

If this fails because libgd3 isn’t installed, repeat the command without it.

If you choose to keep X, it is helpful to be able to get to it remotely. You can do this via the Microsoft Remote Desktop Protocol, or VNC, by adding a single package. To install this package, “xrdp”, run the following command

sudo apt-get install xrdp -y

Now let’s update all the software on the machine. Updating the software on an Ubuntu or Debian machine is really easy.

Make sure your machine is connected to the internet. Get to a command line, like the console, via SSH, or using something like xterm or a terminal. Then type the following command, hit enter, and put in your password so you get root access.

sudo apt-get update;sudo apt-get dist-upgrade -y;sudo apt-get -y --purge autoremove;sudo reboot

So let’s explain this a little bit. The semicolons separate the commands. In Debian-based systems, apt-get is the basic command to work with software packages. There is a huge library of software available for free, and like the Google Play store, packages can be installed, removed, and updated using this command, or one of the many wrappers over it.

apt-get update

Updates the local copy of what is available, and versions.

apt-get dist-upgrade -y

“dist-upgrade” tells apt-get to install all software updates, and the “-y” says, “just answer yes”.

apt-get -y --purge autoremove

“autoremove” tells apt-get to remove all software packages that are no longer needed. “-y” again means answer yes, and “--purge” says to remove all associated config files, leaving the system squeaky clean.

sudo reboot

For the most part, this isn’t necessary; only a kernel upgrade truly requires a reboot.

But what about automating the updates so they happen on a timely basis? It is a pain to log in every day, run these commands, and reboot if necessary. There is a package in Debian systems that will automatically install all security updates, called “unattended-upgrades”, so let’s install it. Go to the command line again, and install the package by typing the following command.

sudo apt-get install unattended-upgrades -y

Hopefully, you will get a message that says it is already installed. Then configure the package to automatically install all the updates with this command

sudo dpkg-reconfigure -plow unattended-upgrades

However, this will neither reboot the machine when an update requires it, nor remove unused packages, nor install non-security updates. Also, “autoremove” isn’t terribly efficient at removing unneeded software packages.
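
Depending on the version of the package, some of this can be turned on in its own config file, /etc/apt/apt.conf.d/50unattended-upgrades; check the comments in that file to see what your version supports. The scripts below give more control, but for reference:

// in /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";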

There is a package called “deborphan”; it will find unused packages, and it can be used in combination with apt-get to help keep things clean. The following command will show you all the software packages that don’t really need to be installed; we will make more use of this in a moment.

deborphan --guess-all

So let’s make some scripts to help keep things clean. Let’s start with removing old kernels. Old kernels can take up a huge amount of space. However, we do not want to remove the kernel we are using, so borrowing from another page as a starting point, we get the following command. I added the “grep -v `uname -r`” because the original command had a problem of removing ALL kernels from the system (which is a great reason to have the backup OS installed on the eMMC). If you are not familiar with Unix editors like emacs, vim, or vi, I suggest you use nano to create the following files. This first file will be “remove-old-kernels.sh”, and for lack of a better place, we will put it in the “/bin/” folder. To create it using nano, use the following command:

sudo nano /bin/remove-old-kernels.sh

then copy the following text into the file.

dpkg -l 'linux-image-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | grep -v `uname -r` | xargs sudo apt-get -y purge

A short explanation of the above command is as follows. The “dpkg -l” portion lists all installed kernel packages. The two “sed” pipes grab the kernel versions from the installed list. The “grep -v” returns the list of installed kernels EXCEPT the one that is currently being used. “xargs” turns it all into an argument list, and finally “apt-get -y purge” removes everything in that argument list.
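
If you want to preview what would be removed before committing, run the same pipeline without the final “xargs … purge”; it just prints the package list:

dpkg -l 'linux-image-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | grep -v `uname -r`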

Next we need to make our autoupdate.sh script, so create it like you did the remove-old-kernels.sh script

sudo nano /bin/autoupdate.sh

and again copy the code

apt-get update
apt-get dist-upgrade -y
apt-get autoremove -y --purge
# clear obsolete .deb files from the local package cache
apt-get autoclean
# purge anything deborphan identifies as orphaned
apt-get purge -y $(deborphan --guess-all)

Lastly, we need a script to run the first two, then reboot, we will call it ‘autoupdate-and-reboot.sh’.

sudo nano /bin/autoupdate-and-reboot.sh

here is the code

/bin/remove-old-kernels.sh
/bin/autoupdate.sh
/sbin/reboot

Now we have three scripts that can be used to keep the system squeaky clean and updated. Of course, none of these neat little scripts will work until we tell Linux that they should be able to be executed. So enter the following command, which will do just that.

sudo chmod +x /bin/*.sh

Yes, you could list out the files individually, but since there shouldn’t be any other .sh files on the freshly built machine, I am not worried about accidentally making a rogue script executable.

You can now run the “autoupdate-and-reboot.sh” script anytime you like to update all the software and reboot the machine. Or add it to a cron job to make sure the system is kept up-to-date.
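
For example, to run it every Sunday at 3:30 AM (the schedule is just an illustration; pick one that suits you), add a line like this to root’s crontab via “sudo crontab -e”:

30 3 * * 0 /bin/autoupdate-and-reboot.sh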

Building a BeagleBone Firewall: Part 3

We have now finished flashing the eMMC (built-in flash storage) on the BeagleBone from Debian to Ubuntu. Next we will make a microSD card also boot Ubuntu.

I was asked by a reader, “why did we flash the eMMC if we are going to use the microSD as the drive our firewall runs on?” My previous explanation apparently wasn’t as clear as I intended, so I will try to be more succinct. Here are my reasons:

  1. I am no longer fond of the official Debian installation
  2. I do not wish to have the distribution on the eMMC be different than the one on the microSD
  3. I don’t want to wear out the eMMC, so I wish to use it as a recovery option rather than the main OS drive; after all, a microSD is very easy to replace.

Back to building the firewall.

Download the image for the microSD card

wget https://rcn-ee.net/deb/microsd/trusty/bone-ubuntu-14.04-console-armhf-2014-08-13-2gb.img.xz

The MD5 sum is 3a5c1d6e85e3b9d7c2f9133fa6197097, should you wish to check it.
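
On Linux, you can verify it by comparing the output of md5sum against that value:

md5sum bone-ubuntu-14.04-console-armhf-2014-08-13-2gb.img.xz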

Flash the card like before, using dd or another image writer. We can simply write over the top of the card we used for flashing the eMMC, because that was only needed once.

Once this is done, hook up your keyboard, monitor, mouse, and USB network adapter, place the microSD into the BeagleBone, and power it up.

Now it is time to make sure the software is up-to-date.

Building a BeagleBone Firewall: Part 2

Since writing part one, this article was brought to my attention; it compares the Arduino, Raspberry Pi, Intel Galileo, and BeagleBone Black. It pretty much shows that the BeagleBone’s computing power and price are unbeatable.

I am affectionately calling this project “BeagleWall”, for lack of a better term. If you haven’t checked out the shopping list and want to follow along, I suggest you do so. BeagleBones are often back-ordered.

So let’s get started. First, we don’t really have to install a new OS. The BeagleBone comes with Debian pre-installed on its 4GB of eMMC (think of this as a permanent jump drive built onto the board). I have a few reservations about using this as the main OS drive though.

When doing my research about flashing a new OS to the eMMC, I found that one of the errors that could happen was a result of “too many writes”. Flashing is the term used for re-writing preloaded memory, such as your phone’s operating system, computer BIOS, wireless router firmware, or similar, so that the functionality is somehow changed. An example of flashing is when you upgrade an iPod/iPad/iPhone to a new version of iOS, or your carrier updates your phone to a new version of Android. As if the “too many writes” error weren’t enough of a reason, when I booted from a microSD, I wasn’t able to mount the eMMC in write mode, which means no recovery without re-flashing the whole OS.

With that in mind, and the fact that I prefer Ubuntu, I decided the best course of action was to put Ubuntu on the eMMC and run from a microSD card. That way, if things fail to boot, it will still boot from eMMC and allow me to mount the microSD and recover.

For something like a firewall or other server, I would never suggest anything other than the “Long Term Support” (LTS) releases from Ubuntu.

If you aren’t familiar with Ubuntu, you might get confused by the names and version numbers. Ubuntu names its releases with alliterations of an adjective and an animal, such as Karmic Koala, Lucid Lynx, and Precise Pangolin, and they have been doing it alphabetically (or have been since Dapper Drake). They number the releases by year and month of release. Thus 12.04 was named Precise Pangolin, and released in April of 2012.

At the time of this writing, Trusty Tahr (14.04) is the most recent release of Ubuntu with Long Term Support, released in April 2014, specifically sub-release one, so it is numbered 14.04.1, and it will be supported until April 2019.

I would prefer to have the same OS on both the eMMC and the microSD card, for my own sanity. Because flashing the eMMC is done from the microSD, we will do that first.

First, go download the image; you can do it with wget if you are using Linux. The MD5 sum is 06f12f0168946cf302e2f6b32e07e007, if you wish to validate the integrity of the file.

wget https://rcn-ee.net/deb/flasher/trusty/BBB-eMMC-flasher-ubuntu-14.04-console-armhf-2014-08-13-2gb.img.xz

If you are using Windows, you will have to unzip the file using 7-zip.
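
On Linux, xz-utils handles the decompression; this produces the .img file that dd expects below:

unxz BBB-eMMC-flasher-ubuntu-14.04-console-armhf-2014-08-13-2gb.img.xz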

Write the image to your microSD card using dd, or something like Win32DiskImager. To do this on my Linux machine, where the card shows up as /dev/sdb, it looks like the following (double-check the device name; dd will happily overwrite the wrong disk).

dd if=BBB-eMMC-flasher-ubuntu-14.04-console-armhf-2014-08-13-2gb.img of=/dev/sdb

For the next part, you don’t even need to hook up a monitor. Just plug the freshly flashed microSD into your BeagleBone, and add power. The lights will flash sequentially, and then all come on solid. Once that happens, disconnect the power and remove the microSD. That is all there is to flashing the new OS to the eMMC.

At this point, I suggest booting up the new OS. Simply connect the monitor, keyboard, and mouse, and apply power.

The default username and password are

Username ubuntu
Password temppwd

If you want to change the default password, sign in and change it. You can also become root by using the command

sudo -i

It will prompt you for your password, and then you can use the same steps to change the root password.
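
Concretely, the stock passwd utility handles both; run the first as root (after sudo -i):

passwd          # as root, changes the root password
passwd ubuntu   # changes the ubuntu user's password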

Now it is time to make the microSD card into an Ubuntu system; this is where we are going to put our firewall.