Adventures in Automating the Embedded, Part 1: Tunneling Ansible through Multi-Hop SSH Proxy Environments

A usage scenario:

In my case, I'm tunneling from a server through remote gateways that handle the routing for a remote network of embedded Linux nodes. The nodes are bridged together in strings of a daisy-chain topology using STP, linked by a fiber backbone of switches down to the gateways. In case you're wondering, the nodes span across a bridge.

The two gateways sit on different networks with two separate internet uplinks. So it's fair to say I'm dealing with a very unconventional network that I have to make work.

Ansible was chosen for automating these nodes because it can be used both for DevOps ansible-playbook work and, since I'm a sysadmin, as a substitute for parallel-ssh. This makes it easier to maintain the large scale I'm dealing with, by simply dividing the separate networked sections into inventory child groups.

The setup and maintenance of the server is also deployed by Ansible, as we thought it fitting to use the same tools for the server side of our infrastructure as we use for the embedded side, though manually configuring the server is the more common approach.

Consider the following:

Node Control Server (Ansible + OpenVPN) --> Gateways --> Nodes

On server:

Set your Ansible config to use 'ssh' as the transport.

ansible.cfg:

[defaults]

# some basic default values…

hostfile = /home/user/ansible/hosts-all
library = /usr/share/ansible
remote_tmp = $HOME/.ansible/tmp
pattern = *
forks = 10
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = ssh
remote_port = 22
….

[ssh_connection]

# ssh arguments to use
# Leaving off ControlPersist will result in poor performance, so use
# paramiko on older platforms rather than removing it

# SSH arguments tweaked to work better with my network
ssh_args = -o ControlMaster=no -o ServerAliveInterval=30 -o ServerAliveCountMax=4

# if True, make ansible use scp if the connection type is ssh
# (default is sftp)
scp_if_ssh = True

——————————————————————-

In my case I didn't want strict host key checking, because that was already handled at the gateway level, and the embedded Linux nodes I'm managing don't have user space and share the same key settings (so it's logging in as root anyway). Worth mentioning that in my use case the gateway links were VPN tunnels.

/etc/ssh/ssh_config or ~/.ssh/config

Host *.firstdomain
ProxyCommand ssh user@gateway-hostname1 nc %h %p
IdentityFile ~/.ssh/id_rsa_gw1
StrictHostKeyChecking=no
User root

Host *.seconddomain
ProxyCommand ssh user@gateway-hostname2 nc %h %p
IdentityFile ~/.ssh/id_rsa_gw2
StrictHostKeyChecking=no
User root
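
Before pointing Ansible at anything, it's worth confirming the proxying works from a plain SSH session first (using one of the node hostnames from the inventory below):

ansibleuser@node-control-server:~$ ssh root@node124.firstdomain hostname

If that hops through the gateway transparently and prints the node's hostname, Ansible will be able to do the same.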

——————————————————————-

Then create your ansible inventory file as such:

ansibleuser@node-control-server:~$ cat ansible/hosts-all
[production:children]
childgroup1
childgroup2
childgroup3
childgroup4

….

[childgroup1]

node124 ansible_ssh_host=node124.firstdomain
node128 ansible_ssh_host=node128.firstdomain
node129 ansible_ssh_host=node129.firstdomain
node130 ansible_ssh_host=node130.firstdomain
node113 ansible_ssh_host=node113.firstdomain
….

[childgroup2]

node110 ansible_ssh_host=node110.firstdomain
node119 ansible_ssh_host=node119.firstdomain
node133 ansible_ssh_host=node133.firstdomain
node117 ansible_ssh_host=node117.firstdomain
node135 ansible_ssh_host=node135.firstdomain

[childgroup3]

node582 ansible_ssh_host=node582.seconddomain
node576 ansible_ssh_host=node576.seconddomain
node573 ansible_ssh_host=node573.seconddomain
node580 ansible_ssh_host=node580.seconddomain
node578 ansible_ssh_host=node578.seconddomain

[childgroup4]

node567 ansible_ssh_host=node567.seconddomain
node577 ansible_ssh_host=node577.seconddomain
node592 ansible_ssh_host=node592.seconddomain
node561 ansible_ssh_host=node561.seconddomain
node571 ansible_ssh_host=node571.seconddomain

——————————————————————

Gateways:

/etc/hosts should be maintained to handle node-to-IP resolution as such:

<IP address>    <NodeHostname>    <NodeHostname>.<firstdomain|seconddomain>

i.e. ...

10.0.0.123     node123     node123.firstdomain

Testing:

ansibleuser@node-control:~$ ansible childgroup2 -m ping
node108 | success >> {
    "changed": false,
    "ping": "pong"
}

node116 | success >> {
    "changed": false,
    "ping": "pong"
}

node109 | success >> {
    "changed": false,
    "ping": "pong"
}

node120 | success >> {
    "changed": false,
    "ping": "pong"
}

node118 | success >> {
    "changed": false,
    "ping": "pong"
}

node106 | success >> {
    "changed": false,
    "ping": "pong"
}

node114 | success >> {
    "changed": false,
    "ping": "pong"
}

node115 | success >> {
    "changed": false,
    "ping": "pong"
}

 

… as you can see, we have connectivity to the nodes, with the proxying handled automatically via SSH and Ansible based on name resolution. You can now scale this over multiple sites by simply adding more gateways to the SSH config file and maintaining the nodes by hostname in your Ansible inventory.
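
As a sketch of what that scaling looks like (the third-site names here are hypothetical), each additional site is just another Host block in the SSH config and another child group in the inventory:

Host *.thirddomain
ProxyCommand ssh user@gateway-hostname3 nc %h %p
IdentityFile ~/.ssh/id_rsa_gw3
StrictHostKeyChecking=no
User root

[childgroup5]
node601 ansible_ssh_host=node601.thirddomain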

Install NVIDIA Optimus Drivers on Ubuntu 14.04

 

Edit: Rather than having duplicate content on different sites, I've opted to just link to my article on xmodulo.com, because it drives me nuts when I find Google matches to the same content on multiple sites.

http://xmodulo.com/2014/08/install-configure-nvidia-optimus-driver-ubuntu.html

Improved Jailed SFTP Creator for Web Server

So I decided to give this idea an overhaul after I was asked to find a way to create these accounts from PHP on the web server.

I definitely wouldn't go as far as to say I regard myself as a programmer, so there are probably things in the script below that would make a veteran programmer cringe, but it's a massive improvement over the way it was, and it was really only designed for my own usage anyway.

The other main new requirement is that Apache can write to the new user's SFTP directory, so they can upload data and get reporting on their services.

Because of security concerns it was decided that it wouldn't be wise to allow a PHP script to execute shell commands as root from the web server, no matter how we went about it. So the changes here were made with the next version in mind, which uses a cron job to run the shell commands independently.

So the idea here is that a PHP script will, when required, generate a simple file with the new username and password, and a cron job running every few minutes will look for that file.

If there is a request to add a public key, the script will search for it and add it as well.

It also generates an output file containing the information for the new account, along with the last 5 lines of /var/log/secure to confirm that the permissions were applied correctly.
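
As a sketch of the cron side of that (the schedule, script path and log path here are just examples, not necessarily what you'd use):

# /etc/cron.d/sftp-creator
*/5 * * * * root /usr/local/bin/sftp-creator.sh >> /var/log/sftp-creator.log 2>&1

Since the script below exits early when no request file exists, running it this frequently is harmless.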

As mentioned in my previous post on the matter, you must have your sshd_config set up as follows.

Prerequisites:

OpenSSH 5 (I had to upgrade OpenSSH manually because CentOS 5 natively ships an older version that wouldn't suffice for this purpose).

…if by chance you need to do the same on CentOS or another RPM distro that has the same limitation, follow the installation instructions in this guide:

Installing OpenSSH 5 in CentOS 5

Add group SFTP:

In the script below I've added, but haven't yet tested, a few lines that check whether the group exists and create it if it doesn't (the checkSFTPonly function). Otherwise simply run this command:

# groupadd sftponly


Configuring:

Locate your sshd_config file and open with your desired text editor.

Locate the following line and comment it out with a # if it isn't already:

# Subsystem sftp /usr/libexec/openssh/sftp-server

Scroll down and append the following to the end:


Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
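
Then restart the SSH service so the Match block takes effect:

# /etc/init.d/sshd restart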

The script:

Change the paths for lusername, pubkey and the output file to suit:

#!/bin/bash
## Creates user and necessary SFTP directory permissions
## Refined Version 16th June 2014 --Chris. Tested CentOS 5+6
## Change the file paths in the getPaths function to suit:
## *lusername:
## *pubkey:
## *outputFilePath:
## Note: Pre-Configuration for group sftponly in sshd_config
## for SFTP is necessary.
isRoot() {
    # Check if the current user is root or not !
    if [ $EUID = 0 ]; then
        return 0
    else
        return 1
    fi
}
getPaths() {
    lusername="/var/www/html/newsftp"
    outputFilePath="/home/admin/"
    pubkey="/var/www/html/newsftpkey"
    hostName=$(hostname)
}
findUsername() {
    ## Path to user data file, in the format: <username> <password>
    if [ -f "$lusername" ]; then
        echo "found username file"
    else
        echo "no username file match" && exit
    fi
}
checkSFTPonly() {
    ## Create the sftponly group if it doesn't already exist
    if grep -q '^sftponly:' /etc/group; then
        echo "group exists!"
    else
        echo "group doesn't exist!"
        /usr/sbin/groupadd sftponly
        echo "group sftponly created"
    fi
}

outputKeys() {
    outputFile="$outputFilePath/${user}_sftp"
    echo "Your SFTP account details are: " >> "$outputFile"
    echo "Username: $user" >> "$outputFile"
    echo "Password: $pass" >> "$outputFile"
    echo "Directory for files: /sftp" >> "$outputFile"
    echo "Port: 222" >> "$outputFile"
    echo "Example: sftp -oPort=222 $user@$hostName:/sftp" >> "$outputFile"
    echo "Looking for public key in path $pubkey"

    ## If a pubkey is found at the path, add it to .ssh/authorized_keys.
    ## No early exit here, so the request files still get archived below
    ## even when no key was supplied.
    if [ -f "$pubkey" ]; then
        echo "pubkey found - adding to authorized_keys.."
        cat "$pubkey" >> "/home/$user/.ssh/authorized_keys"
        echo "Public key added to authorized_keys: " >> "$outputFile"
        echo -e "$pub \n" >> "$outputFile"
    else
        echo "pubkey not found - you will have to add it manually" | tee -a "$outputFile"
    fi
    ## Adds line spacing to the account info output
    ## (GNU sed: appends a spacer line after every line)
    sed -i '0~1 a\ ' "$outputFile"
    echo -e "Log output to confirm/debug: \n"

    ## Copies secure log output lines - CentOS/RHEL only!
    tail -5 /var/log/secure | tee -a "$outputFile"
    echo -e "\n Account info can be found at $outputFile"
    ## Archive the request files so the next cron run doesn't reprocess them
    mv "$lusername" "/home/admin/sftp_$user"
    [ -f "$pubkey" ] && mv "$pubkey" "/home/admin/${user}_key"
}

assignUserPass() {
    pub=$(cat "$pubkey" 2>/dev/null)
    echo "$pub"
    ## Get user and pass variables from the request file
    getUserPass=$(cat "$lusername")
    user=$(echo "$getUserPass" | awk '{ print $1 }')
    pass=$(echo "$getUserPass" | awk '{ print $2 }')
    echo "$user $pass"
    ## Add user. Note: useradd -p expects an already-hashed password,
    ## so set the plaintext one via passwd --stdin (CentOS/RHEL) instead.
    echo "creating account for $user..."
    /usr/sbin/useradd -m "$user"
    echo "$pass" | passwd --stdin "$user"
    ## Add to group 'sftponly'
    /usr/sbin/usermod -G sftponly "$user"
    ## Strip account of shell privileges
    /usr/sbin/usermod -s /bin/false "$user"
    ## Chroot requirements: home directory owned by root, not group-writable
    chown root:root "/home/$user"
    chmod 755 "/home/$user"
    ## Create directory for file transfers
    mkdir "/home/$user/sftp"
    echo "/home/$user/sftp directory created"
    ## Give the user ownership and the sftponly group group-ownership
    chown "$user":sftponly "/home/$user/sftp"
    ## Enable Apache to write to the directory by adding the apache user
    ## to the sftponly group (usermod takes a user as its final argument)
    /usr/sbin/usermod -aG sftponly apache
    chmod g+w "/home/$user/sftp"
    ## Make and lock down the .ssh directory & authorized_keys file
    mkdir "/home/$user/.ssh"
    touch "/home/$user/.ssh/authorized_keys"
    chown "$user":"$user" "/home/$user/.ssh"
    chmod 700 "/home/$user/.ssh"
    chown "$user":"$user" "/home/$user/.ssh/authorized_keys"
    chmod 600 "/home/$user/.ssh/authorized_keys"
}

## Runs only if root
if isRoot; then
    getPaths;findUsername;checkSFTPonly;assignUserPass;outputKeys
else
    echo "Not root. Exiting.." && exit 1
fi

Linux Script to Enable Randomization of MAC Address on Startup

Made up this little thing for use on Debian/Ubuntu based systems in order to spoof the wlan0 address to a different MAC on every start up.

So whether you’re trying to preserve anonymity by installing it on a device that you use a lot on public networks, or cover your tracks in the snow when up to mischief by putting it on your Kali install, it is pretty handy.

Worth noting that it's probably not a wise idea to assign this to an interface you use static IP addressing for, as you'd have to change your MAC in the router settings every start up. In that case it's better to pick a fixed fake MAC address and set it to that on every start up instead of randomizing, by changing:

macchanger -r wlan0

...to:

macchanger --mac=XX:XX:XX:XX:XX:XX wlan0

To enable this for other devices besides wlan0, just uncomment the lines corresponding to your desired NIC in this part of the script:

echo "
#!/bin/sh

ifconfig wlan0 down
#ifconfig wlan1 down
#ifconfig eth0 down
#ifconfig eth1 down

macchanger -r wlan0
#macchanger -r wlan1
#macchanger -r eth0
#macchanger -r eth1

ifconfig wlan0 up
#ifconfig wlan1 up
#ifconfig eth0 up
#ifconfig eth1 up

" > /etc/init.d/macchangerstartup
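
For completeness, a minimal sketch of what the rest of macchanger.sh presumably does after writing that file out (the full script may differ):

chmod +x /etc/init.d/macchangerstartup
update-rc.d macchangerstartup defaults

The update-rc.d line registers the init script to run at start up, which is also why the revert steps below remove it with update-rc.d.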

Instructions:

  1. Copy script and save as macchanger.sh to your home directory.
  2. Open up a terminal. Mark file as executable:                 
      sudo chmod +x macchanger.sh
  3. Simply run as root:
      sudo bash macchanger.sh

To temporarily reset the MAC address back to the hardware default:

    sudo macchanger -p wlan0

To revert permanently:

    sudo update-rc.d -f macchangerstartup remove

    sudo rm /etc/init.d/macchangerstartup

 

Jailed chroot SFTP accounts CentOS 5 (dirty interactive script at end)

Say you're running a webserver with APIs that offer specific services to customers, which as a result generate some form of valuable output file, like reports. Or possibly the opposite situation, where frequent uploads of files such as databases are necessary for the webapp services.

Despite these reports being generated and easily accessible for download via the web portal, and despite the ability to upload necessary files to the webapps, there is always going to be a user who wants to do this in a different manner.

And who can blame them, when the user is dealing with the personal information of their client base and also wants a form of file management that can be scripted and automated?

So how can we give them this access in a manner that restricts their ability to interact with the rest of the server? The answer is a jailed SFTP account.

Prerequisites:

OpenSSH 5 (I had to upgrade OpenSSH manually because CentOS 5 natively ships an older version that wouldn't suffice for this purpose).

…if by chance you need to do the same on CentOS or another RPM distro that has the same limitation, follow the installation instructions in this guide:

Installing OpenSSH 5 in CentOS 5


Configuring:

Locate your sshd_config file and open with your desired text editor.

Locate the following line and comment it out with a # if it isn't already:

# Subsystem sftp /usr/libexec/openssh/sftp-server

Scroll down and append the following to the end:


Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no

Restart the SSH service:

# /etc/init.d/sshd restart

... (or # service ssh restart, depending on distro)

Now we need to make an account and a folder that the user can read and write to, with restricted permissions on the user's home folder itself, no access to any other folders on the server, and most of all NO SHELL ACCESS.

The best way to achieve this is to set up a chroot jail, so that when they connect via SFTP, /home/<user>/ appears to them as the root directory, which they do not own and cannot write to. This is particularly important because otherwise they could potentially alter SSH configuration files, such as the pubkeys - an obvious security risk.

Instead, the folder /home/<user>/sftp/ (or, to them, /sftp) is created, where they have ownership along with read and write permissions.

Firstly we need to create a group for the user called ‘sftponly’, just like we specified in the sshd config file:

# groupadd sftponly

Now, if you're lazy, feel free to run this interactive script I created that will do the rest. Simply copy the script below into a file, name it what you want, and run it like so:

# bash /path/to/script

***Remember to chmod +x the file first.

Then, simply follow the prompts. 

Otherwise you can just follow the command order in the script or alter to suit your fancy.

 

Be sure to generate and copy the pubkey onto the server if you want the script to add it automatically; otherwise you can do it manually later by copying it into the ~/.ssh/authorized_keys file.
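
If you haven't generated one yet, the client-side steps look something like this (the key path and server address are just examples):

$ ssh-keygen -t rsa -f ~/.ssh/id_rsa_sftp
$ scp ~/.ssh/id_rsa_sftp.pub admin@yourserver:/tmp/newuser_key.pub

The script prompts for that pubkey path when it asks about importing a public key.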

#!/usr/bin/env bash
# This script serves to create and configure new user accounts for SFTP
# Just enter in the details correctly when prompted and it'll do it all for you.
# You can thank me later ;) -- Your friendly admin.
#
# Note: Script must be run as root. Run as "bash SFTPcreator.sh"
# Note: Tested only on CentOS 5.9
# Note: Requires OpenSSH 5
# Pre-Config: Make sure the following is matched in sshd_config:
#
#
# " #Subsystem sftp /usr/libexec/openssh/sftp-server"
#
# ...as well as:
#
# " Subsystem sftp internal-sftp
# Match Group sftponly
# ChrootDirectory /home/%u
# ForceCommand internal-sftp
# AllowTcpForwarding no "
#

VERSION='0.1';
isRoot() {
    # Check if the current user is root or not!
    if [ $EUID = 0 ]; then
        return 0
    else
        return 1
    fi
}
makeSftpUser() {
    read -p "Type in desired SFTP username \ home folder name: " User
    /usr/sbin/useradd -m "$User"
    /usr/sbin/usermod -G sftponly "$User"
    # Restricts SSH shell access so they can SFTP but not SSH
    /usr/sbin/usermod -s /bin/false "$User"
    # Creates password for account (passwd --stdin is CentOS/RHEL specific)
    read -p "Type in password for the account: " Pass
    echo -n "$Pass" | passwd --stdin "$User"

    clear
    ## Creates ~/$User/sftp directory and sets correct privileges and
    ## permissions to allow SSH public key authentication. Also restricts
    ## the user's ability to modify their home directory, and grants full
    ## read/write access to their ~/$User/sftp directory
    chown root:root "/home/$User"
    chmod 755 "/home/$User"
    mkdir "/home/$User/sftp"
    chown "$User":sftponly "/home/$User/sftp"
    mkdir "/home/$User/.ssh"
    touch "/home/$User/.ssh/authorized_keys"
    chown "$User":"$User" "/home/$User/.ssh"
    chmod 700 "/home/$User/.ssh"
    chown "$User":"$User" "/home/$User/.ssh/authorized_keys"
    chmod 600 "/home/$User/.ssh/authorized_keys"
    clear
    addPubKeyYN
}
addPubKeyYN() {
    echo "Note: You need to have a valid pubkey file installed"
    echo "..and be able to specify the file path to continue"
    echo " "
    echo " "
    read -p "Would you like to import a public key (y/n)? " ANS
    if [ "$ANS" = y ]; then
        clear
        read -p "Can you please humour me and re-enter your new username?: " User2
        read -p "Please specify the path to pubkey file (or files using wildcards): " Path
        # $Path is left unquoted on purpose so wildcards expand
        cat $Path >> "/home/$User2/.ssh/authorized_keys"
        clear && restartAndExit
    else
        clear
        echo "If you selected no then you will need to add pubkeys manually" && sleep 5 && clear && restartAndExit
    fi
}
restartAndExit() {
    ## Restart sshd
    echo "Restarting sshd service and exiting."
    /etc/init.d/sshd restart
    reset
    echo " "
    echo "If all went well you can connect using the command: "
    echo " "
    echo "sftp -oPort=<PORT> <USERNAME>@<HOST>:/sftp" && sleep 5
    exit
}
## Proceeds if root, exits if not
if isRoot; then
    makeSftpUser
else
    exit 1
fi

Linux tools that I can't live without since learning them

Here are a few tools I've discovered that I'm now not willing to live without on my Linux installations.

Tilda:

Tired of having to switch back and forth between terminal emulator window and browser constantly? Tilda can remedy this.

It's an incredibly useful tool that lets you configure a drop-down terminal window activated with a hotkey combination. You can also customise its dimensions, font size, colour and appearance. Transparency customisation is an option that I really appreciate.

In fact you can also customise it to run a specific command on opening instead of just opening the shell.

Installing:

sudo apt-get install tilda

XBindKeys:

Now, this one I only discovered somewhat recently, and it has already become a convenience I'd hate to be without.

XBindKeys allows you to assign keyboard shortcuts in a somewhat more sophisticated way than the standard Ubuntu keyboard shortcuts package, but that alone is not the reason I love it.

What this application actually allows you to do is assign keyboard shortcuts that input strings of text. So if you have a command, an IP address or an email address that you find yourself typing again and again, give this program a go.

Installing:

sudo apt-get install xbindkeys

xbindkeys --defaults > ~/.xbindkeysrc

sudo apt-get install xbindkeys-config xvkbd

xbindkeys

To configure it, run (Alt + F2 and type) xbindkeys-config

After you assign a new shortcut name and hotkey combination, enter the following in the ‘action’ field to create your text input shortcut:

xvkbd -xsendevent -text "enter your desired text here"

To enable this configuration to persist after reboot, simply add "xbindkeys" as a command in the session start up applications window (run "gnome-session-properties" for GNOME 3).
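
If you'd rather skip the GUI, the entries xbindkeys-config writes to ~/.xbindkeysrc are simple enough to add by hand. A minimal sketch (the text and the key combination here are just examples):

# type a canned address with Ctrl+Alt+e
"xvkbd -xsendevent -text 'admin@example.com'"
    control+alt + e

Restart xbindkeys after editing (e.g. killall xbindkeys && xbindkeys) for the change to take effect.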

RSync:

This one is great for managing contents of directories over multiple machines.

RSync allows you to sync files between devices over multiple platforms (Linux, OS X and Windows) via SSH, so it's nice and secure. You can specify options to exclude particular files or file types, delete contents in the destination path that aren't in the source path, skip files where a more recent version exists in the destination, or only update files that already exist in the destination path.

What makes this tool superior to FTP for this particular purpose is that it doesn't transfer whole files, and instead only transfers the chunks of the files that have changed, which needless to say shortens transfer time. On top of that, it offers compression of the data, which again means it transfers quicker.

This comes in particularly handy when wanting to keep system restore backups up to date over multiple machines, as they can get sizable.

So when you've got your commands nicely configured for your purpose, you can put them into a script and configure cron to automatically run it on your desired schedule (see the sketch after the usage example below). Cool, huh?

Usage:

rsync -zva --exclude 'FilesToExclude' SourceUser@SourceIP:/path/to/source/ /path/to/destination/

This command enables archive mode, compression and verbose output, as well as specifying file exclusion criteria.
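
Tying in the cron idea from above: once a command like that works, dropping it into the crontab is all the automation you need. A sketch (the schedule and paths are examples):

# crontab -e
0 2 * * * rsync -zva --exclude 'FilesToExclude' SourceUser@SourceIP:/path/to/source/ /path/to/destination/ >> /var/log/backup-rsync.log 2>&1

Note this assumes key-based SSH authentication to the source machine, since cron can't answer a password prompt.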

You can read more about using RSync on this page.

PPA-Purge:

So… you broke everything yet again installing beta nvidia drivers.

PPA-Purge will, as the name suggests, purge all packages installed from a particular PPA and downgrade them back to the defaults.

Installation:

sudo apt-get install ppa-purge

Usage:

sudo ppa-purge ppa:xorg-edgers/ppa

Bye bye drivers that broke everything!


Disclaimer: Will not remove the temptation to break everything again.

…pictures to come once I finish screwing my kernel with graphics drivers.

3 Lessons from installing kernel module drivers I learnt yesterday

Okay. Most of this stuff is just common sense, but common sense doesn’t usually occur until you learn the hard way by breaking things…

It’s like learning not to manually alter your xorg.conf file without backing it up first – the first time you restart the system to find that after GRUB loads the screen goes black is usually a wake up call, though personally I’ve made that mistake about 10 times at least since…

So yesterday I went to try to fix my partner's wireless issues on her laptop running Mint, where the ath9k driver was resulting in slow speeds, frequent drop outs and problems reconnecting.

Figuring that installing the backport wireless drivers package would help the cause like it did on another computer running Ubuntu, I had a rude surprise after a restart: the resolution was screwed up, and not only did the wireless card not even register, but the machine lost ethernet as well.

I managed to modprobe the ath9k driver in order to remove the package that screwed everything up… it took 4 attempts because the damned thing kept disconnecting.

You would think that uninstalling the package, rmmod'ing the new non-working wireless drivers and removing the backport kernel headers would fix it, but unfortunately not, as the uninstallation process removed the original drivers.
So after another restart I had lost not only my ethernet NIC driver but the ath9k driver as well, leaving me connection-less.

Whether you're wanting to switch to a daily-build dkms driver for your mobo's sound chipset to fix your crappy audio, or attempting to install those proprietary nVidia drivers from the additional drivers section, heed these words:

Lesson 1. Never remove all previous kernel versions – you never know when one might save your butt:

This usually goes without saying; however, recently when setting up new installations I've taken to using a post-installation script I found and customized, so I don't have to sit there for an hour upgrading and installing/configuring everything I need. This script finishes with an option to clean up unused package cache, and also gives the option to remove previous Linux kernel versions.

Thank heavens that in this instance I did not run that option when I set up this installation, as I was able to load the kernel from the initial installation, which had retained ethernet function and still had the ath9k driver… other times in the past I wasn't so lucky.

So I plugged it in, updated the repository data and ran apt-get dist-upgrade to install the newest kernel again. Upon restarting, the resolution was fixed, I had ethernet function, and the only step required to restore the previous crappy wireless function was to type modprobe ath9k. So I was back to step one.

Lesson 2. Explore other options to fix the problem BEFORE messing with your kernel:

 
Sounds logical enough, but sometimes when trawling through forums trying to find a fix that works for your system, you'll come across a blog that says 'don't bother with that crap - just install these drivers instead'. When, in the past, you've tried everything else under the sun and finally installing different driver modules fixed it with no hassle, it's tempting to opt for that solution straight away when troubleshooting something else.

My advice is DON’T.

Why?
Because in this case I had to spend a fair bit of time getting the system back to square one, and once I did, I managed to resolve the original problem with much simpler steps. It's not the first time something like this has happened, so hopefully now I'll be wiser, and hopefully you'll listen to my advice and save yourself the same suffering.

For those curious about the ath9k wireless specifics:

I ran lsmod and looked at the list, saw that asus_wmi was loaded, and then added it to the blacklist file in /etc/modprobe.d.

I then ran "rmmod ath9k" and "modprobe ath9k nohwcrypt=1" instead.
After an hour it appeared to have fixed the problem, so I added "options ath9k nohwcrypt=1" to a file in modprobe.d and it was fixed.
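
In file form, the two persistent pieces of that fix look like this (the .conf file names under /etc/modprobe.d/ are your choice):

# /etc/modprobe.d/blacklist.conf
blacklist asus_wmi

# /etc/modprobe.d/ath9k.conf
options ath9k nohwcrypt=1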

Really should have remembered those steps from when I had to fix the same problem on the same computer over a year ago with Ubuntu installed… which brings me to the last point:

Lesson 3. Keep a log of how you fix your goddam problems so when you put a new installation on the same computer a year down the track you don’t waste time unnecessarily. 

Once again, common sense. A stitch in time may cost 2 minutes of inconvenience, but as time goes on I keep regretting not writing my fix down for each device… I continue to spend hours solving something from the past that I was too stupid to record.

Another wonderful example of stupidity that could have been fixed if I was wiser:
Once I locked myself out of a privately hosted virtual webserver I built, and a Drupal webstore I was partway through making, ALL because I stopped for a few weeks and lost the piece of paper with the root password, along with my MySQL admin password to boot.
The cloud host I was trialing had a really shitty proxy console window, which didn't work 99% of the time and encouraged people to rely on SSH or RDP instead - neither of which were any use when I needed to boot a live CD, drop to a recovery shell, chroot the filesystem and change my password.

Attempting to rebuild a few months later, I realised I'd wasted so much time re-reading and implementing actions to configure everything the same way, all of which could have been sped up if I'd copied and pasted the process into my Evernote as I went.

Now I try to keep some records, or at least post my fix on the forums to hopefully help somebody else, and to remind me how I solved it when I'm searching the forums about the same problem in a year's time. I have multiple devices running Linux distros and multiple VMs with Linux installations as well, all of which have different configurations and different passwords unique to each instance, and some of which require a driver fix to get a hardware device working properly.

It’s easy to forget things if you don’t do them every day, so just be smarter than me and keep a log.

Nanotechnology and Blood Cells

For years and years, whenever people spoke of the concept of immortality or a longer life span, the first thought was always 'what if you could replace your blood cells with nanobots?' that carry out the same functions, i.e. gas transport and immune responses. There would be no more disease: instead of the delay while lymphocytes recognize a new strain of a viral pathogen before a full immune response fights it, nanobots could be programmed to recognize new strains and immediately initiate an immune response, acting more in the manner of lymphocytes recognizing a virus strain from a previous infection.

What makes things like HIV difficult to cure is that HIV is a retrovirus. Normally, cells produce proteins for biological function using processes called transcription and translation, whereby DNA acts as the code from which RNA copies sections in order to assemble amino acids into proteins. A virus is non-living, and reproduces by inserting DNA into the host cell's genome, causing that cell to create more copies.

A retrovirus, on the other hand, is comprised of RNA, which undergoes a process of reverse transcription to create DNA that is then incorporated into the host cell's genome to produce more copies of itself. Lacking the proof-reading of DNA replication, the virus mutates rapidly during recombination and thus eludes the mechanisms by which current antiviral drugs operate, as well as the ability to create a vaccine.

So what if you could program a nanobot to identify a part of the virus that will be subject to much less variability such as a surface protein and initiate an immune response?

Next comes the concept of the respirocyte, a nanobot that would act as an erythrocyte (red blood cell). This concept has been discussed as potentially regulating the transport of gases via a mechanical process rather than a chemical one, with over 200 times the efficiency of an erythrocyte. This in turn would supply tissues with oxygen for up to 4 hours without breathing, creating many potential medical applications. Somebody could go into cardiac arrest, get in the car and drive themselves to the hospital. People suffering respiratory disease or obstruction could be kept alive long enough to treat the issue, making death from asphyxiation a thing of the past.

 

Picture taken from:
www.azonano.com/images/Article_Images/ImageForArticle_3034(1).jpg

Above is a theoretical design of a respirocyte, around 1 µm in diameter, roughly 6 times smaller than a red blood cell, and thus able to travel easily through constricted capillaries, where red blood cells would normally have reduced flow because of their size.

The life cycle of an erythrocyte is around 4 months, whereas a respirocyte could potentially run indefinitely, using natural glucose supplies as fuel and regulating its function via surface sensors that monitor gas levels and report to its nano-computer core.

All this sounds crazy, but it is well within the grasp of our lifetime.

Further reading:

http://www.foresight.org/Nanomedicine/Respirocytes.html

http://boards.medscape.com/forums?128@1.exgHafJogoM@.2a381fc6!comment=1&cat=All

http://www.azonano.com/article.aspx?ArticleID=3034