Using Savon with SellerCloud

At work, we’re moving to a new sales/inventory/fulfillment platform called SellerCloud. Their API, while open, is SOAP-based, which means I had to learn something new to make this all work.

XML is evil, but here’s the final bit of configuration that got it all working:

client = Savon.client(
  wsdl: "http://--.ws.sellercloud.com/scservice.asmx?WSDL",
  soap_header: {
    "AuthHeader" => {
      :@xmlns              => "http://api.sellercloud.com/",
      "UserName"           => "user",
      "Password"           => "password",
      "ApplicationName"    => "Test",
      "ApplicationVersion" => 1
    }
  },
  log: true,
  env_namespace: :soap,
  pretty_print_xml: true
)
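From there, it’s standard Savon usage: list the operations the WSDL exposes and call the one you need. Here’s a quick sketch building on the client above; the operation name and message keys are placeholders, not actual SellerCloud calls:

# see which SOAP operations the WSDL exposes
puts client.operations

# call one of them (:get_order_details and "OrderID" are placeholder names)
response = client.call(:get_order_details, message: { "OrderID" => 123456 })
puts response.body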

Building a testing platform in Ruby/Sinatra

As I mentioned, we’re building a new testing system at my job. We’re both web people first, so it made sense to build something around web technologies that makes the whole thing easier to work on.

We’re using Ruby, with the Sinatra gem (think a simplified Rails, and you’re close). The PXE image boots into the i3 window manager, and autostarts the testing script (which is plain Ruby outside the Sinatra bits).

The testing uses a few small utilities, like memtester, seeker (a very small utility I found to test seeking), and Ruby to glue it all together.

Fun things I’ve learned:

1) Trying to parse output from a curses-based UI is not going to end well
1a) It’s OK to make the curses-based UI pop up on the screen as long as it’s for good reason
2) Ruby has really simple script forking, which means that I can run the HDD and memory tests simultaneously (there’s a sketch of this after the list)
2a) Ruby has really simple script forking, which means, much to my chagrin, that no, splitting memory testing into four parts doesn’t make it go faster.
3) Ruby makes really nice glue for stuff like this
3a) I think I’m stuck, help.
4) Assuming that a device will pass a test, and only failing it if it doesn’t, is a perfectly viable strategy
4a) Don’t forget to actually mark it as failed.
5) Don’t overthink your plans — sometimes it’s best to figure out a basic plan and run with it.
5a) Don’t underthink your plans, either — that’ll come back to bite ya.
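For the curious, point 2 looks roughly like this in practice; the memtester and seeker invocations below are placeholders for what the real script runs:

# run the memory and HDD tests in parallel, each in its own child process
mem_pid = fork do
  passed = system("sudo memtester 1024 1")               # memtester exits 0 on a pass
  File.write("/tmp/memresult", passed ? "Pass" : "Fail")
end
hdd_pid = fork do
  passed = system("sudo ./seeker /dev/sda")              # placeholder seek-test invocation
  File.write("/tmp/hddresult", passed ? "Pass" : "Fail")
end
Process.wait(mem_pid)   # only needed if you want to block until both tests finish
Process.wait(hdd_pid)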

Building a modern system testing application

So, part of the project that led to Seymour (my drive geometry script) picked up yesterday — we are building a simplified system test suite that runs over a PXE image.

Right now, we have a Ruby script that uses the Sinatra gem to generate a web page that we can use to monitor the progress. It’s fairly simple so far – it displays the status of a memory testing tool, and we’re implementing hard drive testing soon – but it looks good, it’s useful, and it is way, way cheaper than our current solution.

I learned today how to do forking in Ruby. Beyond being really, really easy, it is really useful for something like this – we don’t want the application to be held up because we’re waiting for the memory test to complete.

To do the forking, I wrote the following:
memstatus = "Testing in Progress"
fork do
memstatus = system("sudo memtester #{memTestAmt.to_s} 1").passfail
exit
end

I’m using tempfiles to store the results; Ruby’s Tempfile class fortunately makes that a nice, clean package.

So, we fork for the memory test and again for the hard drive test, then use the result to determine the content of the page. Seems pretty simple, I guess? It’s been interesting so far, even for a simple application that does so many hacky things.
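For what it’s worth, the page itself just reads the result file on each request. Here’s a rough sketch, assuming the memresult tempfile and memstatus variable from the snippet above (the route and output are simplified):

require "sinatra"

get "/" do
  # File.size? returns nil for a missing or empty file, i.e. the test hasn't finished yet
  mem = File.size?(memresult.path) ? File.read(memresult.path) : memstatus
  "Memory test: #{mem}"
end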

Drive Geometry Specs in Clonezilla

Yesterday, I posted about automating drive geometry in Clonezilla.  I realized that the actual information was somewhat lacking, so here’s an explanation of how this all fits together.

sda-pt.sf is the only file that I’ve found to be strictly necessary to change drive geometry between images. sda-pt.sf contains the partition table for device sda, appropriately enough.

Here’s an example of sda-pt.sf:

label: dos
label-id: 0xf0a1cc41
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 204800, type=7, bootable
/dev/sda2 : start= 206848, size= 142272512, type=7
/dev/sda3 : start= 142479360, size= 13821952, type=7

The first four lines aren’t particularly noteworthy. It’s a DOS partition table, it came from /dev/sda (the first disk in a system, essentially), and the numbers used are specified in sectors of the disk (usually 512 bytes per sector).

What’s really interesting here is the last three lines. These are the actual partition entries; for each one, the .sf format specifies a start sector, a size, and a type.

Before we dig into the resizing process, I want to point out one small detail that left me a little confused at first. Note how sda1 (the first partition) starts at sector 2048 with a size of 204800 sectors (exactly 100MB), while sda2 starts at 206848. That works because the sector at start is counted in the size, so the next partition begins at start + size (2048 + 204800 = 206848), not one sector later.

Moving on, let’s get into the real fun – calculating the new partition table from this. In this example, we want the second partition to grow, since this image is for a Windows 7 machine. The first partition is the system partition and the third is the recovery partition, so we don’t need to worry about those growing.

For this example, we’ll make the geometry spec for a 256GB drive, which we figure will have 500,000,000 sectors.

We start from the end and work our way back. sda3 is 13,821,952 sectors, so its start point should be 500,000,000 - 13,821,952 = 486,178,048.

We can change the line for sda3 to the following: /dev/sda3 : start= 486178048, size= 13821952, type=7

Since sda2 is our primary partition, we now need to calculate its new size. This, fortunately, is a simple matter of subtracting the start of sda3 from the start of sda2. In this instance, 486178048 - 206848 = 485,971,200.

This means we can change the line for sda2 to the following: /dev/sda2 : start= 206848, size= 485971200, type=7

The first partition doesn’t change at all, so we now have a new geometry specification:

/dev/sda1 : start= 2048, size= 204800, type=7, bootable
/dev/sda2 : start= 206848, size= 485971200, type=7
/dev/sda3 : start= 486178048, size= 13821952, type=7

Please be sure to keep the partition types the same, though; changing those will break things.
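Since the subtraction gets tedious by hand, here’s a rough Ruby sketch of the same calculation, using the numbers from the example above and assuming the same three-partition layout (system, primary, recovery):

# recompute the partition table for a new drive size (all values in 512-byte sectors)
total_sectors = 500_000_000      # the 256GB drive from the example
sda2_start    = 206_848          # unchanged
sda3_size     = 13_821_952       # recovery keeps its size, but moves to the end of the disk

sda3_start = total_sectors - sda3_size
sda2_size  = sda3_start - sda2_start   # the primary partition grows to fill the space in between

puts "/dev/sda2 : start= #{sda2_start}, size= #{sda2_size}, type=7"
puts "/dev/sda3 : start= #{sda3_start}, size= #{sda3_size}, type=7"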

That’s it for today, though! Hope you got something useful out of this, and feel free to ask questions!

Creating multiple images from a single base

At work, we send out multiple PCs per day, imaged across a range of models.  Within each model, we have a range of HDD sizes.

I was asked to make a networked imaging server, and that’s been relatively easy (I’ll write on that later), using DRBL and Clonezilla.

We have a few issues within our use case, though – Clonezilla can technically handle making an image larger, but *only* proportionally.  This is an issue when you’re going from an 80GB image to a 1TB drive – “Why is the recovery partition 80GB?”

Of course, we could just make hundreds of images for every possible drive size, but that’s a lot of room, especially when the difference between drives is purely logical.

Instead, a bit of digging into how Clonezilla stores images and metadata led me to a great discovery.  In each image folder, there is a set of metadata files, specifically the geometry specification located in sda-pt.sf.

By modifying this file, we are able to change the drive geometry to our specifications.  This was the first step.

After that, it was a simple matter of creating hard links to each non-changing file for that model (the partclone images, specifically).  We now have a Ruby script that generates the new image folders, links the files to the base image, and copies over the files that will change between images.  After that, it’s a simple matter of generating the geometry (which the script handles in the most common case), and we can have any number of drive size images for the cost of the base image and a few KB per extra image.
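To give a feel for the linking step, here’s a rough Ruby sketch of the idea; the paths and the metadata-versus-image test are illustrative rather than the exact logic our script uses:

require "fileutils"

base  = "/home/partimag/model-base"   # illustrative image folder paths
clone = "/home/partimag/model-1tb"
FileUtils.mkdir_p(clone)

Dir.children(base).each do |f|
  src, dst = File.join(base, f), File.join(clone, f)
  if f.end_with?(".sf")     # small metadata files that differ per image get real copies
    FileUtils.cp(src, dst)
  else                      # the big partclone files are hard-linked, costing no extra space
    FileUtils.ln(src, dst)
  end
end
# then rewrite the clone's sda-pt.sf with the new geometry, as in the previous post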


XMLRPC Attacks

I learned a lovely thing yesterday – there’s such a thing as an XMLRPC attack on WordPress (which powers this site, as well as another I admin).

Essentially, this attack uses the XML-RPC interface present in WP to try to guess passwords, and there are variations of this attack that allow an attacker to guess multiple passwords in a single attempt, thanks to the way the XML-RPC interface is structured.

I only discovered this attack because of the load it generated: an attacker was (and, as of my last check, still is, though now being denied) sending an average of 41 requests per second.  Each of those requests hit the database, and it quickly became a massive backlog of transactions waiting to be processed.

My symptoms included constant database errors, other applications being slow or unusable, and incredibly long wait times for access.  In my logs, though, I found the real key to my issues – hundreds of POST requests for the xmlrpc.php file.  Googling this brought me to an explanation of the attack, and I was able to block the attackers (my apologies to any legitimate users in the Netherlands) in Apache’s configuration, leaving me more or less fully operational again.

So, good times yesterday, dealing with international attacks.

Automating system specs and inventory

I work for a company that refurbishes electronics.  We get plenty of servers in, and processing them can take a while, especially on the models that try to obfuscate their specs.

To solve this, I started making a boot drive that can handle all of it automatically – boot, specs, and printing, all in one.

From a previous project, I have a ZP450 printer that’s hooked up to a thin client running CUPS, and it prints on 4×6 labels.

I took a solid state drive and installed Antergos on it (any flavor will work, I just had an Antergos installer lying around for some reason), and installed only the base packages (which, admittedly, was still more than I wanted to have, so I did some paring down later).

Once it was installed, I wrote a simple script that gathers the server information sales needs and writes it to a text file.  From a previous project, I’d learned to make my life easier by using the serial number as the filename, so that part was straightforward.  My biggest remaining issue was how to print it remotely.

This was fairly easy, actually.  With SSH keys, the login is automatic, and by having the script run on boot and wait for network access, I can move the file over with SCP (again, no prompts!) and tell the print server to print it.
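Pieced together, the hand-off is just a few shell-outs. Here’s a rough Ruby sketch of it, with the hostname, user, printer name, and spec-gathering commands all standing in for the real ones:

# rough sketch of the collect-and-print step; every name here is a stand-in
serial  = `dmidecode -s system-serial-number`.strip
specs   = `lscpu` + `free -h` + `lsblk`                  # placeholder spec-gathering commands
outfile = "/tmp/#{serial}.txt"
File.write(outfile, specs)

sleep 5 until system("ping -c 1 -W 1 printhost > /dev/null")   # wait for the network to come up

# SSH keys let both of these run without a password prompt
system("scp #{outfile} user@printhost:/tmp/")
system("ssh user@printhost lp -d zp450 /tmp/#{serial}.txt")    # print via the CUPS queue on the thin client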

Overall, pretty simple 🙂

Raspberry Pi 3 Media Server, Part 3

Now that I’ve gone over the initial design, time to start digging into the meat and potatoes of this thing!

The Raspberry Pi 3 has a 1.2GHz quad-core ARM processor, and 1GB of RAM.  While it’s not going to win any awards for raw processing power, for a lot of applications, it’s sufficient, including the ones here.

In my original design, one RPi was going to run Plex Media Server exclusively, while the other handled transcoding video.  Interestingly, Plex does run natively on the RPi (or, more specifically, on ARM).  The standard documentation, unfortunately, is out of date if you want to run it on a 3B.

Instead, follow the directions straight from the packager; they pull in the armhf build, which is what Raspbian on the 3B actually runs:

# become root
sudo su
# add my public key
wget -O - https://dev2day.de/pms/dev2day-pms.gpg.key | apt-key add -
# add my PMS repo
echo "deb [arch=armhf] https://dev2day.de/pms/ jessie main" >> /etc/apt/sources.list.d/pms.list
# activate https
apt-get install apt-transport-https
#enable armhf support
dpkg --add-architecture armhf 
# update the repos
apt-get update
# install PMS
apt-get install plexmediaserver-installer:armhf

Essentially, this makes sure that we get a version that takes full advantage of the hardware.  Once the server is installed, you can just start it up.

sudo systemctl enable plexmediaserver && sudo systemctl start plexmediaserver

Once the server is up and running, just visit the page in your browser

http://rpiaddress:32400/manage

Obviously, replace rpiaddress with your Raspberry Pi’s IP address, but after that, it’s just a matter of configuring it!

Your Vim May Vary 1 – SCP Editing

Another thing I’ve been learning about lately is Vim.

In the 11 years I’ve been using Linux, I always stuck to simple text editors – nano was a good choice for me, since it’s pretty much entirely visual and the keybindings are printed at the bottom.  But lately, I’ve been doing more scripting, and I’ve needed more functionality from a text editor, so I started learning how to use Vim.  I’m still not particularly advanced in it, but it’s suiting my needs better.  One of my favorite recent discoveries is the ability to edit remote files using a local Vim – I’ve put a small amount of work into my vim configuration, so I’d like to see my work not go away every time I need to edit a remote file.

It’s a pretty simple thing to access, actually.

vim scp://user@host/relative/path/

vim scp://user@host//absolute/path

The user can be omitted if it’s the same name as your current user, and the relative path is relative to the login directory, usually ~ .  I’d recommend setting up SSH keys, just to make it more transparent (although there are plenty of other reasons to set up keys over password authentication).  From there, Vim will treat it almost exactly like a local file, until you save; the buffer is automatically sent to the remote location when you write it.  Pretty useful if you’re constantly working on remote machines!

Transmission Remote Client

If you use a lot of torrents, you eventually realize that shutting off your computer at night means it’s no longer downloading.  It’s sometimes frustrating; we don’t wanna leave our stuff on all the time… right?

A lot of low-power devices support torrent clients.  The Raspberry Pi, since it runs Linux, supports a handful, but my first experience with a remote BitTorrent client was actually in the form of my router.

My router is an Asus RT-AC68R (the link is to the AC68U, which is functionally identical to the older R model).  While mine is running alternate firmware, the core functionality here is present in the stock version.  If you install the “DownloadMaster” software package, you can use the router to download torrents to a connected USB device.

The web interface for DownloadMaster kinda sucks, to be frank.  It presents very few options, it’s slow, it freaks out easily… and it’s a pain to add torrents when you’re used to the convenience of a desktop-based client.

Fortunately, we have a really easy workaround!  The DownloadMaster application is actually a package, and the BitTorrent client inside it is a daemonized version of the open-source Transmission client.

Once we know this, it’s really easy to get the best of both worlds – easy configuration, monitoring, and adding of torrents, while running on a lower power device.

Transmission has a great remote client.  If your router, or RPi, or any other device, is running the transmission daemon, chances are, you can use the remote client to make managing it a lot easier.

For this, we’ll be discussing the transmission-remote-gtk client, which is available for Windows and Linux (and probably Mac, but I’ve honestly never tried it).

Check out the Windows client or install it on your Linux distribution:

sudo apt-get install transmission-remote-gtk

on Debian-based systems (Ubuntu, Linux Mint).  The package name is fairly consistent across distributions, so use your distribution’s package management software to install it.

Once it’s installed, open it up, and you’ll see what looks like a fairly basic torrent client.  For the most part, it is – Transmission never has tried to be the most feature packed client, but it does what it does well.

In the remote client, click the “Connect” button in the upper left, and fill in the details for your device (IP address, user name, password, etc) – if you’re using the DownloadMaster setup from Asus, you can use your router username and password, and the default ports.  It’s pretty straightforward.  Save your settings and click connect again, and you’ll see your list of torrents in the main frame of the application.

From here on out, it functions just like a regular torrent client; you can even click links to add them (assuming your system is set up to send the link to the remote client; on Windows, make sure the association is set and you’re good to go.)

The remote client lets you specify a fair bit more, such as download locations, files to download – essentially, the things you expect to see in a modern client.

Overall, I’d strongly recommend using a setup like this – it’s simple to do, offers a lot of flexibility, and is just plain interesting.

Not a fan of Transmission for some reason?  Other clients, such as Deluge, also support this sort of remote access!