quick and dirty: WiFi with EAP Authentication

This is a mini guide to quickly getting your wireless network running with what is commonly called “WPA Enterprise”.

For this “quick and dirty” Howto I’m using a clean installation of Ubuntu 14.04. The only difference between distributions should be the installation of the freeradius package.

Install FreeRADIUS Server

apt-get install freeradius

Next, we’re using the default configuration and adding the needed options to let our Access Point query the RADIUS server. All the configs we need to touch are in /etc/freeradius

Add the following to your clients.conf:

client <IP of Access Point> {
        secret         = <random secret>
        shortname      = wlanap
}

Next, add some users to the users file:

alex Cleartext-Password := whatever
otheruser Cleartext-Password := verysecret

Now restart your RADIUS server:

service freeradius restart
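
Before involving the Access Point, you can optionally check the new users locally. This is just a quick sketch and assumes the freeradius-utils package (which provides radtest) is installed and that the default localhost client with its stock secret testing123 is still present in clients.conf:

radtest alex whatever localhost 0 testing123

An Access-Accept in the output means the entry in the users file works.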

Configure EAP on your Access Point

Next step: you need to tell your Access Point to use WPA Enterprise and tell it about the RADIUS server. This differs greatly from brand to brand, but mostly you will find it among the security-related WiFi options. On my Netgear WNR2000v4, it is under ADVANCED -> Setup -> Wireless Setup. Just enter the IP of your RADIUS server (the port is 1812 if you need it) and the shared secret we set in clients.conf (see screenshot).

Log in to your Access Point

When you look for Access Points around you, you should now see your Access Point, and it should tell you that it is secured with 802.1X.

When trying to log in, it should ask you for credentials and certificates (certificates are not used with our configuration), so just go ahead and type in the username and password you set in your users file.

That’s all. Your device should tell you that it is connected to your Access Point.

You can always add more users in your freeradius’ users file.

Not working?

If it is not working, try stopping the RADIUS server and starting it on a console with freeradius -X to get a debug session, and check whether

  • Auth requests are arriving from your Access Point
  • your Access Point is allowed to query (wrong client IP or secret in clients.conf?)
  • your username and password are free of typos

asterisk: pushing CDRs into elasticsearch using logstash

As the combination of logstash, elasticsearch and kibana is pretty powerful, I decided to put my CDRs into an elasticsearch database as well. With Kibana, you can then pull any kind of statistics out of your CDRs, like seeing who answered the most calls from a queue, or how many calls you had in total over a given period of time.

I just use the default Master.csv asterisk creates and let logstash parse it and push it into elasticsearch.

input {
    # read default Master.csv
    file {
        path => "/var/log/asterisk/cdr-csv/Master.csv"
        start_position => "beginning"
        sincedb_path => "/var/lib/logstash/sincedb-cdr"
        type => "cdr"
    }
}

filter {
    # use the csv filter and give all columns proper names
    csv {
        columns => [
            "src", "dst", "dcontext",
            "clid", "channel", "dstchannel",
            "lastapp", "lastdata",
            "start", "answer", "end", "duration", "billsec",
            "disposition", "amaflags", "uniqueid", "userfield"
        ]
    }

    # if dstchannel is present, split tech, name and id into 3 separate fields
    if [dstchannel] == "" {
        mutate { remove_field => ["dstchannel"] }
    } else {
        grok {
            match => ["dstchannel", "%{DATA:dstchannel_tech}/%{DATA:dstchannel_name}-%{BASE16NUM:dstchannel_id}"]
        }
    }

    # do the same to channel
    grok {
        match => ["channel", "%{DATA:channel_tech}/%{DATA:channel_name}-%{BASE16NUM:channel_id}"]
    }

    # duration and billsec are integers, message is not needed anymore (just the plain CSV line anyway)
    mutate {
        convert => [
            "duration", "integer",
            "billsec", "integer"
        ]
        remove_field => ["message"]
    }

    # convert all the date columns to real dates
    # (the first date filter sets @timestamp from the start column)
    date {
        match => ["start", "yyyy-MM-dd HH:mm:ss"]
    }
    date {
        match => ["start", "yyyy-MM-dd HH:mm:ss"]
        target => "start"
    }
    date {
        match => ["answer", "yyyy-MM-dd HH:mm:ss"]
        target => "answer"
    }
    date {
        match => ["end", "yyyy-MM-dd HH:mm:ss"]
        target => "end"
    }
}

# and finally output to elasticsearch
output {
    #stdout { debug => true }
    elasticsearch_http {
        host => "localhost"
    }
}
That’s all for that. Your CDRs should appear in elasticsearch as soon as asterisk has written them to the Master.csv.
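
If you want to check that documents really arrive, you can query elasticsearch directly. This is a minimal sketch assuming logstash writes into its default logstash-* indices on the same host:

curl 'http://localhost:9200/logstash-*/_search?q=type:cdr&size=1&pretty'

If a CDR document comes back, Kibana will find the data as well.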

asterisk: cdr_adaptive_odbc with MySQL

As the asterisk module cdr_addon_mysql for writing CDRs directly to MySQL has been deprecated for quite a while now, here is a short guide on how to replace it with cdr_adaptive_odbc. As I’m running my asterisk on Ubuntu, this guide is for Ubuntu as well. It should be pretty similar for other distributions.

First, we need to install the MySQL ODBC driver. This will install any missing dependencies (like unixodbc) as well:

apt-get install libmyodbc

After that, we need to tell unixodbc to load the MySQL driver. This is done in /etc/odbcinst.ini:

[MySQL]
Description     = MySQL driver
Driver          = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup           = /usr/lib/x86_64-linux-gnu/odbc/libodbcmyS.so

Next, we need to create a DSN to tell unixodbc where to connect to and what driver to use. This is done in /etc/odbc.ini:

[asterisk-cdr]
Description         = Asterisk CDR
Driver              = MySQL
Database            = asterisk
Socket              = /var/run/mysqld/mysqld.sock
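
At this point you can already test the DSN outside of asterisk. A quick sketch using isql (it ships with unixodbc), with the MySQL user and password we will configure in res_odbc.conf below:

isql -v asterisk-cdr cdr password

A “Connected!” banner and a SQL> prompt mean that unixodbc and the MySQL driver are working.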

Half of the job is done already. In the last 2 steps, we tell asterisk to use the asterisk-cdr DSN we defined one step earlier.
Edit the file /etc/asterisk/res_odbc.conf and add:

[cdr]
enabled => yes
; DSN to use (see odbc.ini)
dsn => asterisk-cdr
; Username and Password
username => cdr
password => password
; immediately connect
pre-connect => yes

In the last step, we tell cdr_adaptive_odbc to use the Database we just defined as [cdr]. Edit /etc/asterisk/cdr_adaptive_odbc.conf and add:

[cdr]
; Database connection (see res_odbc.conf)
connection=cdr
; Database table
table=cdr

And that’s all. You should be able to keep using the existing cdr table you used with cdr_addon_mysql without problems.
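
To verify the whole chain from inside asterisk, you can reload the modules and look at the ODBC status (just a quick check; the exact CLI output differs between asterisk versions):

asterisk -rx "module reload res_odbc.so"
asterisk -rx "module reload cdr_adaptive_odbc.so"
asterisk -rx "odbc show"

"odbc show" should list the cdr connection as connected, and after the next completed call a new row should show up in your cdr table.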

Exclude single files from auth with .htaccess

Imagine that on your webserver you have an .htaccess file for a URL like /administrator/ with the following contents:

AuthName private
AuthType Basic
AuthUserFile /etc/apache/htpasswd.users
require valid-user

This is quite common and nothing special so far.

But now you want to exclude, say, a single file from authentication, because it needs to be accessible for everyone. To do this, just add the following to your .htaccess as well:

<Files notify.php>
order allow,deny
allow from all
satisfy any
</Files>

The important part here is not the “allow from all”, but rather the “satisfy any”. The default of “satisfy all” means that both the “allow/deny” directives for host-based access control and the “require” directives for user authentication have to be met. With “satisfy any”, fulfilling any one of the directives is enough to be authorized. And the combination of “satisfy any” and “allow from all” always grants access.
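
Putting both pieces together, the complete .htaccess then looks like this:

AuthName private
AuthType Basic
AuthUserFile /etc/apache/htpasswd.users
require valid-user

<Files notify.php>
order allow,deny
allow from all
satisfy any
</Files>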

More on the satisfy-Directive can be found in the official apache documentation at http://httpd.apache.org/docs/current/mod/core.html#satisfy

multipathing on IBM BladeCenter with IBM DS3200

I recently had to install Ubuntu (10.04 LTS to be precise) on an HS21 Blade installed in an IBM BladeCenter E and attached to an IBM DS3200 storage via SAS. The storage was attached to the server using 2 SAS connections, so using multipathing to protect against hardware failure was a good thing to implement. But unfortunately, using multipathing on Linux is not quite straightforward, and the list of easy-to-follow howtos is sparse (a good example I mostly used can be found here). Another problem was that the system has no local storage and so had to boot from the SAN as well. So here is how I created my configuration and made the system boot and work.

Continue reading ‘multipathing on IBM BladeCenter with IBM DS3200’ »


htop

As a modern alternative to the often-used top, I discovered htop:

htop is an ncurses-based terminal program that shows the currently running processes – like top. But where top stops, htop has much more to offer:

  • kill and renice multiple processes with just a few keystrokes and without even thinking about PIDs
  • horizontal and vertical scrolling
  • Supports using a mouse
  • can show processes as a tree (I really find this useful)
  • can hide kernel threads
  • and it’s colorful :)

The obligatory screenshot is on the left. All in all, htop is a very featureful alternative to top and really worth a try, especially as the installation is just a “sudo apt-get install htop”.

tcpdump with rotating capture-files

Sometimes you have a hard-to-debug problem and you use tcpdump to analyze the dumps later with Wireshark, after the problem has happened again. But with tcpdump running for some time, the capture files can easily grow to several hundred MiB, which is not really practical to open and handle with Wireshark.

To keep file sizes reasonable, tcpdump offers a couple of handy options that can help.

rotating with timestamps

Rotating capture files with a timestamp is a very simple and convenient solution. Using the -G option, you can specify after how many seconds tcpdump should open a new capture file.
With -G present, the -w option accepts strftime placeholders (like %H for hour, %M for minute and so on), so you can name the file after the current date.


root@host:~# tcpdump -pni eth0 -s65535 -G 3600 -w 'trace_%Y-%m-%d_%H:%M:%S.pcap'

tcpdump would expand the strftime placeholders, which could result in a filename like trace_2010-08-30_13:04:55.pcap. All the available placeholders are documented in strftime(3).

The downside is that this will run until stopped or until you run out of disk space.

One option would be to limit the number of files tcpdump creates. This can be done using the -W command line option.


root@host:~# tcpdump -pni eth0 -s65535 -G 3600 -w 'trace_%Y-%m-%d_%H:%M:%S.pcap' -W 5

This works just like the example above, with the difference that tcpdump will stop capturing after writing the 5th file – effectively capturing for 5 * 3600 seconds = 5 hours.

Another method would be a cronjob that deletes old captures.

rotating by size

As an alternative to “rotate after x seconds”, tcpdump can also rotate the file after it has grown to a defined size. This is done by specifying -C with the file size in
megabytes on the command line. The file given with -w will have a number appended to it, starting at 1 and counting upwards. strftime placeholders are not supported.


root@host:~# tcpdump -pni eth0 -s65535 -C 100 -w capture

This would create a file named capture1. After 100MB of data, tcpdump would create a file named capture2. After another 100MB, it creates the file capture3 and so on.

This will likewise run until stopped or until the disk is full.

To counter the “disk full”, the -W option behaves differently when combined with -C. Instead of exiting, tcpdump “rotates” back to the file capture1, effectively overwriting it. With this, you have something
like a ring buffer – never using more than a predefined amount of disk space.


root@host:~# tcpdump -pni eth0 -s65535 -C 100 -W 10 -w capture

In this example, tcpdump starts capturing into capture1 until it reaches capture10. When it has filled up capture10 with 100MB of data, it starts again, overwriting capture1. This way, your captures
will never use more than 1000MB of disk space.

postrotating capture files

One other thing that helps preserve disk space is to compress the capture files after tcpdump has finished writing to them. For this, tcpdump has a built-in “postrotate” command option: -z <command>

When tcpdump closes a capture file, it executes the specified command with the just-closed capture file as the first and only argument.


root@host:~# tcpdump -pni eth0 -s65535 -G 3600 -w 'trace_%Y-%m-%d_%H:%M:%S.pcap' -z gzip

This works like the first example, but after closing a file it uses gzip to compress it. Be aware that you cannot pass any additional arguments to the command (-z “gzip -9″ will not work), so if
you need additional options, you have to create a wrapper script and use that instead.
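
Such a wrapper could look like the following sketch (the path /usr/local/bin/gzip9 is just an example name):

#!/bin/sh
# gzip9 - compress the capture file tcpdump hands us, with maximum compression
exec /bin/gzip -9 "$1"

Make it executable and pass it to tcpdump as -z /usr/local/bin/gzip9 (keep in mind that apparmor, as described below, has to allow this path as well).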

If running your postrotate command does not work and results in a Permission denied error like

root@host:~# tcpdump -pni eth0 -s65535 -G 3600 -w 'trace_%Y-%m-%d_%H:%M:%S.pcap' -z gzip
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
compress_savefile:execlp(echo, trace_2010-08-30_13:04:55.pcap): Permission denied
root@host:~# dmesg | tail -n1
[34471806.841102] type=1503 audit(1395226938.909:41): operation="exec" pid=10460 parent=10447 profile="/usr/sbin/tcpdump" requested_mask="x::" denied_mask="x::" fsuid=0 ouid=0 name="/bin/gzip"

then apparmor is preventing tcpdump from running any other command. You can allow tcpdump to run gzip by adding it to /etc/apparmor.d/usr.sbin.tcpdump:

# vim:syntax=apparmor
# Last Modified: Wed Feb 3 07:58:30 2009
# Author: Jamie Strandboge <[email protected]>
#include <tunables/global>

/usr/sbin/tcpdump {
  # ... (keep the existing rules) ...
  /bin/gzip Uxr,
}

After reloading the apparmor profiles, tcpdump should be able to produce gzipped files.
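
Reloading just this one profile can, for example, be done with apparmor_parser (using the profile path from above):

apparmor_parser -r /etc/apparmor.d/usr.sbin.tcpdump

Alternatively, a “service apparmor reload” reloads all profiles.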

snmpd syslog spam

The snmpd on Debian-based systems logs all messages to syslog, into /var/log/daemon.log. That is a good thing, but with the default configuration every connection is logged as well, which can cause lots of messages, especially when using SNMP-based monitoring solutions like Nagios or Cacti.

Jul 11 16:24:56 www1 snmpd[2120]: Connection from UDP: []:56686->[]
Jul 11 16:25:27 www1 snmpd[2120]: last message repeated 965 times

Relief comes in the form of a command line option that tells snmpd to only log messages above a certain severity. On Debian/Ubuntu, these options live in /etc/default/snmpd. The defaults at the moment are as follows:

SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid'

For us, only the option “-Lsd” is of importance. It tells snmpd that all logging (L) is done via syslog (s) using the daemon (d) facility. “-Lf /dev/null” additionally deactivates file-based logging. To limit which messages snmpd logs, change the “-Ls” to “-LS”, which takes an additional parameter, namely how severe a message has to be at least to get logged. Now, to log all warnings (and worse), change the parameter to “-LS4d”. The complete line of command line options could then look like:

SNMPDOPTS='-LS4d -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid'

After restarting snmpd, connections will no longer be logged. On a side note, I don’t recommend having snmpd listen on a public interface anyway, as up to SNMP v2c everything is transmitted in clear text.

The complete documentation can be found in the manpage snmpcmd(1) under “LOGGING OPTIONS”.

Storing PHP-Sessions in memcached

Sessions in PHP are a nice thing, but they can be problematic if your application runs behind a load balancer. Then you either have to write your own session handler using the session_set_save_handler function, which can be somewhat time consuming, or use an already existing handler.

Memcached on the other hand has proven itself for caching data of all kinds, and it lends itself to storing sessions as well. Best of all, the PHP extension “memcached” comes with a handler for storing sessions, and the installation on Debian/Ubuntu is almost trivial.

There is just one drawback you should keep in mind: you cannot simply restart the memcached server, because you would lose all stored sessions. If that is not too big of a problem, memcached should do the job fine.

On the server that should run memcached and store the sessions, you have to install memcached first:

apt-get install memcached

After installation, memcached is started automatically, and it will also be started on every boot.
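
If you want to make sure memcached is actually reachable, here is a quick sketch using netcat (assuming the default port 11211 and that a netcat variant is installed):

printf 'stats\r\nquit\r\n' | nc localhost 11211

This should print a list of STAT lines.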

On the client, where the PHP scripts that use the sessions live, you just have to install the memcached extension for PHP:

apt-get install php5-memcached

Now you just have to edit the php.ini to tell PHP that it should use the “memcached” handler for sessions and where the memcached servers are. For that, edit the file /etc/php5/apache2/php.ini.

session.save_handler = memcached
session.save_path = ""

Sessions are now stored on the memcached server specified with session.save_path. To increase performance even more, you can specify multiple memcached servers as a comma-separated list. The memcached client will automatically distribute the load across the servers.
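
As a concrete example with two purely hypothetical memcached servers at 192.0.2.10 and 192.0.2.11, the two php.ini settings would look like this:

session.save_handler = memcached
session.save_path = "192.0.2.10:11211,192.0.2.11:11211"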

Now just reload your apache server

apache2ctl graceful

Done – all sessions are now stored in memcached.

Internet via UMTS

As mentioned earlier, I needed internet access at a far, far away location in Italy. But unfortunately, the local telecom didn’t want to give us access – not cost-effective. And considering that it would have cost them 10,000€ or more, their point is justifiable. The next possibility would have been WiFi access – but sadly a hotel was blocking the line of sight to the next access point and there was no way around it. Interestingly though, the location seemed to have a really good quality UMTS signal (the BlackBerry showed full 5 out of 5 bars), which opened yet another door for me.

The internet connection was not supposed to be a “quickly attach a USB stick to the laptop” type of connection, but more like a domestic DSL or cable access. So a router with UMTS support was needed. But looking at commercial products, costs were around 1,000€ or even more – completely outside my budget. And a simple desktop PC with Linux? It would work, but the relatively high power usage, bulkiness and noise of such a system are rather impractical. But there was a rather simple solution: the ALIX board from PC Engines – or more precisely, the alix6b2.

The ALIX board is a fully-fledged x86 PC in a tiny form factor (just about 15x15cm), based on AMD’s Geode LX CPU with 500MHz and 256MB RAM on board, and it provides enough power for a simple router. It has no regular monitor connectivity, but a serial console is more than enough for Linux.

Other than that, some additional parts were needed as well:

  • A CompactFlash card for the OS
  • A mini-PCIe UMTS card (the ALIX board supports USB connectivity only!)
  • Pigtails and antenna cable
  • and of course a UMTS antenna

I used a 4GB SanDisk Extreme III as the CF card – it offers good performance and is quite durable. Then a Sierra Wireless MC8790 for UMTS connectivity. This card is rather costly, but I didn’t want to connect a cheap USB stick or similar device to it.

Here is a picture of the fully built board:


That’s really compact, isn’t it? And 0dB noise – you merely hear a faint humming of the voltage converters when the box is under full load.

And if you want to compare this box to a commercial product, here is what has been paid so far:

ALIX 6b2 – 115€
Sierra Wireless MC8790 – 200€
SanDisk CF card – 15€
Case, pigtail – 15€

So that’s around 350€. If you really want to go low-budget, you can swap the UMTS card for a USB stick and take an alix2d3 – saving around 150€. But USB sticks rarely offer external antenna connectors, and Linux compatibility isn’t guaranteed either. The advantage is that the alix2d3 is even smaller – only 10x15cm!

The last thing needed was an adequate UMTS antenna, so that even in bad weather the signal would be bearable. After some searching I settled for an SLP14-MK2 from Thieking – it offers 9dBi of gain, has compact dimensions and, at around 45€, wasn’t too costly either. If you need more, you can of course get bigger and more powerful antennas as well :)

With all the equipment together, I could start right away. As the OS, I used Vyatta (a Linux distribution specifically developed for routers and firewalls), and after doing the configuration (howto is work in progress) the box was operational.

The only thing missing now was the SIM card. I used a Vodafone prepaid card for the box (Vodafone was the only provider with a good signal) – a UMTS flat rate for 5€/day was okay for the moment.

So that’s it  – internet far away from civilization! :)

And if you want to know how to install and configure Vyatta for this task, I’ll write a howto and put it online in the next few days!