Monday, September 13, 2010
routes on linux
Turns out that calling ifup picks up the static routes listed in route-&lt;interface&gt;:
[root@host network-scripts]# cat route-eth3
192.168.101.0/24 via 192.168.100.0
Saturday, September 11, 2010
kerberos
Reading documentation on kerberos for opensolaris.
http://dlc.sun.com/osol/docs/content/2009.06/IMGPACKAGESYS/gentextid-410.html
Seems to mention commands that don't exist on my system. After searching the webs, I realized that I have a package called SUNWkrb but not SUNWkdc. However, attempts to install SUNWkdc would report:
pkg: No matching version of SUNWkdc can be installed:
pkg://opensolaris.org/SUNWkdc@0.5.11,5.11-0.132:20100130T092746Z: This version is excluded by installed incorporation pkg://opensolaris.org/consolidation/osnet/osnet-incorporation@0.5.11,5.11-0.134:20100302T010625Z
Finally solved as follows:
# dev is not release. for better or worse, I have a dev image...
650 pkg search -s http://pkg.opensolaris.org/dev kdc | grep 134
652 pkg install pkg:/system/security/kerberos-5@0.5.11-0.134
missing man pages in opensolaris
Installed CollabNet subversion, which modified /etc/profile to include
export MANPATH=/opt/CollabNet_Subversion/share/man:$MANPATH
OpenSolaris's man is smart; it finds man pages all by itself without relying on $MANPATH. Setting MANPATH turns off this default behavior, so you get craziness:
root@ns:/etc/# man ls
No manual entry for ls.
Solution: comment out the line specified above.
Sunday, March 08, 2009
imperative make
The unix utility make wants you to declaratively build a dependency tree, so that subsequently each node of this tree can be created using known rules.
This view always went against my intuition. I don't view my program as a file that depends on its output directory and on a bunch of .o files, which depend on .cpp files, which depend on headers. Instead, I view it as a bunch of source files which have to be compiled and linked into the program, and the fact that sometimes some of them don't have to be recompiled is an optimization the compiler should be concerned about. In other words, I view making a program as an imperative task:
compile all sources into .obj files
link .obj files into the output file
So now that I'm porting my stuff to FreeBSD, I decided to re-create the imperative make I've been using under Windows. Instead of porting my selectivebuild.exe, I decided to use perl.
Final makefile:
.ifndef DEBUG
OUTDIR = Release
GCC_OPTS += -O3
.else
OUTDIR = Debug
GCC_OPTS += -g
.endif
GCC_OPTS += -Wno-multichar
INCL = -I/usr/include/c++/4.2 -I.. -I.
SOURCES = byte_io.cpp mt_alloc.cpp cstring.cpp ... lots more cpp files
default:
	@../compile_sources.pl $(OUTDIR) $(SOURCES) $(INCL) -O"$(GCC_OPTS)"
	@../link_sources.pl $(OUTDIR) core.lib $(SOURCES)
clean:
	rm -r Debug Release
This can be invoked with "make" or "make DEBUG=". The obj files and the resulting .lib file go into the output directory, which is either Debug or Release.
So... how do we implement compile_sources.pl and link_sources.pl? Well, the concept is simple. For compile_sources, I run makedepend with option -f- (output to stdout), and parse the resulting dependencies. I filter down the original array of sources to a list of out-of-date sources. I then iterate over that array and call gcc with options passed via -O.
For link_sources, I compare the modification dates of .obj files to the output file. If one of the objects is newer than the output file, I link. Based on the extension of the output file, I either call "ar" (for .lib) or "gcc" (otherwise).
A function objfile_of($outdir,$source) is used to map source files to obj files.
First, the utility file, filtercpp.pl:
#!/usr/bin/perl
# pulls -I options into @includes and the -O string into $compile_opts
use File::Basename;
use Getopt::Long;
Getopt::Long::Configure('bundling');
Getopt::Long::Configure('no_ignore_case');
our @includes = ();
our $compile_opts = "";
GetOptions("I=s" => \@includes,
           "O=s" => \$compile_opts);
@includes = map("-I$_", @includes);

# map a source file to its object file inside $out_dir
sub objfile_of {
    my ($out_dir, $src) = @_;
    $src || return "";
    my ($base, $dir, $ext) = fileparse($src, '\..*');
    return "${dir}$out_dir/${base}.o";
}

# print a command, then run it
sub do_exec {
    print "@_\n";
    `@_`;
}
1;
Now compile_sources.pl:
#!/usr/bin/perl
## filtercpp.pl sets @includes and $compile_opts from the -I/-O options
require "../filtercpp.pl";
$out_dir = shift(@ARGV);
$out_dir || die "usage: compile_sources.pl <outdir> <sources...>";
@sources = @ARGV;

# return the list of sources whose object file is older than the source
# itself or any header it includes (according to makedepend)
sub get_ood {
    my ($out_dir, $sources) = @_;
    my %ood = ();
    open DEPENDS, "makedepend @{$sources} @includes -f- |" or return ();
    while (<DEPENDS>) {
        chomp;
        next if m/DO NOT DELETE/;
        my @vars = split(/:/, $_);
        my $source = $vars[0];
        $source =~ s/\.o$/.cpp/;
        $source ne "" || next;
        next if $ood{$source};
        my $obj = &objfile_of($out_dir, $source);
        my $obj_date = ((stat $obj)[9] || 0);
        for my $header (split('\s+', $vars[1])) {
            my $header_date = (stat $header)[9] || 0;
            if ($header_date > $obj_date) {
                print STDERR "$header ($header_date) is newer than $obj ($obj_date)\n";
                $ood{$source} = 1;
                last;
            }
        }
        if (((stat $source)[9] || 0) > $obj_date) {
            $ood{$source} = 1;
        }
    }
    close DEPENDS;
    return keys(%ood);
}

# compile every out-of-date source into $out_dir
sub compile_sources {
    my ($out_dir, $sources) = @_;
    my @ood_sources = get_ood($out_dir, $sources);
    mkdir $out_dir;
    if (@ood_sources) {
        print "ood sources: @ood_sources\n";
        for my $source (@ood_sources) {
            my $obj = &objfile_of($out_dir, $source);
            &do_exec("gcc $source @includes $compile_opts -c -o$obj");
        }
    }
}
&compile_sources($out_dir, \@sources);
Now link_sources.pl:
#!/usr/bin/perl
## filtercpp.pl sets @includes and $compile_opts from the -I/-O options
require "../filtercpp.pl";
$out_dir = shift(@ARGV);
$out_dir || die "usage: link_sources.pl <outdir> <output> <sources...>";
$out = shift(@ARGV);
$out || die "usage: link_sources.pl <outdir> <output> <sources...>";
@sources = @ARGV;

# relink if any object file is newer than the output file
sub link_sources {
    my ($out_dir, $sources, $out) = @_;
    my $out_date = (stat "$out_dir/$out")[9] || 0;
    my @objs = map(&objfile_of($out_dir, $_), @{$sources});
    for my $obj (@objs) {
        my $obj_date = (stat $obj)[9] || 0;
        if ($obj_date > $out_date) {
            print("$obj is newer than $out\n");
            if ($out =~ /\.lib$/) {
                &do_exec("ar cr $out_dir/$out @objs");
            } else {
                &do_exec("gcc @objs -o $out_dir/$out $compile_opts");
            }
            last;
        }
    }
}
&link_sources($out_dir, \@sources, $out);
Monday, March 02, 2009
SSD -- the future is here
There was a time, long long ago, when simple-minded apes didn't know much about storing information. The best they could do was take magnetized platters, stack them up, spin them around an axis and pick up or deposit charges by moving an actuator arm precariously close to the delicate spinning surface of these platters. There is no good reason for storing information like that. But you see, the people who invented the spinning platter lived on a relatively recently colonized continent. The continent was originally inhabited by primates who were, it is agreed, every bit as intelligent as the newcomers who succeeded them, with one major difference: they had not invented the wheel. Because they had not discovered the Great Benefits Of Rotation, their technological development was stunted, and as a result they simply could not compete. It was thus obvious to everyone, given this historical precedent, that Wheel Is Good.
But all technological cycles come to an end. And last weekend, finally, I took off the feathers, laid down the quiver, and picked up night goggles and a bazooka. I mean, I upgraded my home computer to an SSD drive. It was just a little bit painful, not too much.
First, I tried all the cloning utilities I had used previously, including the Ultimate Boot CD and Seagate's Disk Wizard, but they all failed to recognize the SSD as a valid disk. I don't know why, and I didn't care to figure it out, because the closer you are to hardware, the dumber things become. Thus, when dealing with hardware directly, you are dealing with the dumbest software and the dumbest code. So it wasn't surprising to me at all that even though the new disk (OCZ 120GB, $284) was SATA-compliant, it was perceived differently from other disks. Perhaps the cloning software was stunned to discover that the new disk had no tracks or cylinders. It's just a guess.
I ended up using XXClone, which has a convenient 30-day trial mode. Good thing cloning a hard drive takes less than 30 days. XXClone is fundamentally different from other disk cloners in that it copies one file at a time, using Windows' Volume Snapshot Service. After this is done, you press a button in XXClone that makes the new disk bootable, and you're done.
There was only one problem during this process, and it had to do with junction points. You see, I use two terabyte-sized RAID disks for storing data. Since I hate drive letters, I access these disks through NTFS junction points (see a tool called junction by Mark Russinovich): c:\work leads to one of these 1TB disks, and c:\photo to another. XXClone, after checking that the amount of space taken up by C: was less than the capacity of the new drive, proceeded to naively recurse into these monstrously large directories and attempt to stuff them into the new disk. Fail. Using "junction *" I found all the reparse points on c: (all my reparse points are top-level) and removed them with junction -d.
XXClone was even kind enough to swap disk letters, so that when I booted from the new drive, it was C:, and not G: or something. For this, I am grateful. Windows stores a mapping of volume serial numbers to drive letters somewhere in the registry; this prevents drive letters from randomly reassigning themselves at boot time. But it also prevents disk cloning operations from working smoothly.
So, just like the wheel was a killer app five hundred years ago, solid state is the killer app today.
Thursday, February 26, 2009
Multicast trivia
In multicast IP addresses, the low 28 bits vary (the high four are 1110). But in Ethernet, only 23 bits are available to multicast, so layer-2 devices throw out the high 5 bits of the lower 28 of a multicast address, which obviously causes over-subscription on LANs: 32 different IP groups map onto each multicast MAC. If you subscribe to 224.1.1.1, you'll also get 224.129.1.1, 225.1.1.1, and so on through 239.129.1.1.
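To make the collision concrete, here is a small sketch (mine, not from the original post) of the standard IP-to-MAC mapping: the low 23 bits of the group address are dropped into the fixed 01:00:5e prefix, so any two groups that differ only in the discarded 5 bits land on the same MAC.
// Sketch: map an IPv4 multicast address to its Ethernet multicast MAC.
// Only the low 23 bits of the IP survive; the 5 bits above them are lost.
#include <cstdint>
#include <cstdio>
void print_multicast_mac(uint32_t ip) {          // ip in host byte order
    uint32_t low23 = ip & 0x7FFFFF;              // keep the low 23 bits
    printf("01:00:5e:%02x:%02x:%02x\n",
           (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF);
}
int main() {
    print_multicast_mac((224u<<24)|(1<<16)|(1<<8)|1);   // 224.1.1.1
    print_multicast_mac((239u<<24)|(129<<16)|(1<<8)|1); // 239.129.1.1 -- same MAC
}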
Therefore! Kids, always vary the low byte of a multicast address.
Saturday, February 14, 2009
-lkse or -lthr?
A token-passing test using two threads. The threads use mutexes and condition variables to signal each other.
Machine: 3GHz quad core, running FreeBSD 7.1; not that the number of cores matters, since only one process is running at a time.
Using -lkse, a roundtrip between two threads is 14us.
Using -lthr, it's 1.2us.
-lkse is the N:M threading library (N user threads map onto M kernel threads).
-lthr is the 1:1 threading library (1 kernel thread per user thread)
On a 2.3GHz quad core running Windows XP 64: 14 usec/roundtrip.
Draw your own conclusions. Source code below.
// general utilities
#include <iostream> // cout, cerr
#include <cstdio>
#include <cstdlib>  // exit
#include <string>
#include <sys/time.h>
#include <time.h>   // clock_gettime, CLOCK_REALTIME
#define prlog(xxx) cout << xxx << "\n"
using namespace std;
#define frep_(i, max) for (int i = 0, i##lim=(max); i < i##lim; i++) // forward
#define frep1_(i, max) for (int i = 1, i##lim=(max); i < i##lim; i++)
#define rrep_(i, max) for (int i = (max); i-->0;)
#define rrep1_(i, max) for (int i = (max); --i>0;)
inline void PrintPerfSample(const char* s, double d) {
double secs_per_call=1/d;
if (secs_per_call < 0.000001) {
cout<<s<<": "<<d<<" samples per sec (";
cout<<secs_per_call*1000000000.0 <<" nanosec per sample";
} else if (secs_per_call < 0.001) {
cout<<s<<": "<<d<<" samples per sec (";
cout<<secs_per_call*1000000.0 <<" microsec per sample";
} else if (secs_per_call < 1) {
cout<<s<<": "<<d<<" samples per sec (";
cout<<secs_per_call*1000 <<" msec per sample";
} else {
cout<<s<<": "<<d<<" samples per sec (";
cout<<secs_per_call<<" secs per sample";
}
cout<<")\n";
}
#define DO_PERF_TEST(name,action) \
{\
double base_time = GetAbsTimer();\
frep_(loop_index,10) {\
action;\
};\
int loop_quant = 10/(GetAbsTimer()-base_time);\
if (loop_quant<1000) loop_quant=1000;\
if (loop_quant>100000) loop_quant=100000;\
base_time = GetAbsTimer();\
double timer_sample_overhead = GetAbsTimer()-base_time;\
int timer_sampled=2;\
int number_of_loops=0;\
do {\
frep_(loop_index,loop_quant) {\
action;\
}\
number_of_loops++;\
timer_sampled++;\
} while ((GetAbsTimer()-base_time) < 2);\
PrintPerfSample(name,double(number_of_loops)*double(loop_quant)/(GetAbsTimer()-base_time-timer_sample_overhead*(timer_sampled+1)));\
}
double GetAbsTimer() {
struct timespec ts;
int result=clock_gettime(CLOCK_REALTIME,&ts);
if (result!=0) exit(1);
return ts.tv_sec + ts.tv_nsec*1e-9;
}
// threading utilities
#include <pthread.h>
struct SyncEvent {
pthread_mutex_t mutex;
pthread_cond_t cond;
bool triggered;
SyncEvent() {
pthread_mutex_init(&mutex, 0);
pthread_cond_init(&cond, 0);
triggered = false;
}
~SyncEvent() {
pthread_mutex_destroy(&mutex);
pthread_cond_destroy(&cond);
}
void Set() {
pthread_mutex_lock(&mutex);
triggered=true;
pthread_cond_signal(&cond);
pthread_mutex_unlock(&mutex);
}
void Reset() {
pthread_mutex_lock(&mutex);
triggered=false;
pthread_mutex_unlock(&mutex);
}
void Wait() {
pthread_mutex_lock(&mutex);
while (!triggered) {
pthread_cond_wait(&cond,&mutex);
}
pthread_mutex_unlock(&mutex);
}
bool Test() {
return triggered;
}
};
struct WorkThread {
pthread_t thread;
virtual void DoWork()=0;
WorkThread() {
thread=0;
}
virtual ~WorkThread() {
}
static void* ThreadFunc(void *arg) {
((WorkThread*)arg)->DoWork();
delete (WorkThread*)arg;
return NULL;
}
void Start() {// will delete itself after completion
if (pthread_create(&thread,NULL,&ThreadFunc,(void*)this)!=0) {
prlog("thread creation failed");
}
}
};
// actual code
struct FakePipe {
SyncEvent has_data;
FakePipe() {
}
void Read() {
has_data.Wait();
has_data.Reset();
}
void Write() {
has_data.Set();
}
};
struct Thread1 :WorkThread {
FakePipe &_1,&_2,&_3,&_exit;
Thread1(FakePipe &p1,FakePipe &p2, FakePipe &p3, FakePipe &exit):_1(p1),_2(p2),_3(p3), _exit(exit) {
Start();
}
void DoWork() {
_1.Write();
_2.Read();
DO_PERF_TEST("Messaging",
{
_1.Write();
_2.Read();
}
);
_3.Write();
_1.Write();
_exit.Write();
}
};
struct Thread2 :WorkThread {
FakePipe &_1,&_2,&_3,&_exit;
Thread2(FakePipe &p1,FakePipe &p2, FakePipe &p3, FakePipe &exit):_1(p1),_2(p2),_3(p3), _exit(exit) {
Start();
}
void DoWork() {
while (!_3.has_data.Test()) {
_1.Read();
_2.Write();
}
_exit.Write();
}
};
int main() {
FakePipe _1,_2,_3,_exit1,_exit2;
new Thread2(_1,_2,_3,_exit1);
new Thread1(_1,_2,_3,_exit2);
_exit1.Read();
_exit2.Read();
}
Saturday, November 15, 2008
Network stack overhead (tcp/udp delays)
server: echoes data back
client: sends a packet with a timestamp to the server, reads the data back, notes the difference.
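For reference, here is a minimal sketch of the client half of such a test (an illustration, not the post's actual code); it assumes a UDP echo server is already listening on 127.0.0.1 port 7 and reports min/max/avg the way the tables below do.
// UDP round-trip timer: send a timestamped packet, read the echo, note the difference.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <time.h>
int main() {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in srv = {};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(7);                         // assumed echo port
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
    connect(s, (sockaddr*)&srv, sizeof(srv));
    const int N = 10000;
    double min = 1e9, max = 0, sum = 0;
    char buf[64];
    for (int i = 0; i < N; i++) {
        timespec t0, t1;
        clock_gettime(CLOCK_REALTIME, &t0);
        write(s, &t0, sizeof(t0));                   // the packet carries the timestamp
        read(s, buf, sizeof(buf));                   // wait for the echo
        clock_gettime(CLOCK_REALTIME, &t1);
        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) * 1e-3;
        if (us < min) min = us;
        if (us > max) max = us;
        sum += us;
    }
    printf("%.0f / %.0f / %.0f usec (min/max/avg)\n", min, max, sum / N);
    close(s);
}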
freebsd=2x Opteron 285 (2.6GHz)
ubuntu=1x Pentium 4 @ 3GHz
linux2=quad-core 3.2GHz Xeon
windows=2x Xeon 9775 (3.2GHz) (has GhostWall and possibly other running programs)
windows2=2x Xeon 5472 (3GHz) (no firewall, a cleaner machine)
TCP
All numbers in microseconds (min/max/avg)
windows->localhost: 43 / 244 / 68
windows2->localhost: 33 / 72 / 48
freebsd->localhost: 22 / 35 / 27
ubuntu->localhost: 31 / 106 / 44
linux2->localhost: 34 / 77 / 35
(max is all over the place from 69us to 94us)
freebsd->ubuntu: 64 / 128 / 92
notes: windows code is asynchronous. unix code is blocking (does one read/one write).
UDP:
Windows->localhost: 33 / 332 / 52
Windows2->localhost: 19 / 75 / 32
FreeBsd->localhost: 16 / 18 / 17
Ubuntu->localhost: 24 / 126 / 32
linux2->localhost: 28 / 70 / 29
Conclusions:
1. FreeBsd is the shizzle
2. Linux is not.
3. Windows has the largest variance of all (*what* is there to do for 100 microseconds??)
Wednesday, September 10, 2008
Clipboard utilities
toclip.exe (download) -> copies standard input to the clipboard
saveclip.exe (download) -> saves any bitmap in the clipboard to a file
More info here: www.alexeilebedev.com/clip
Monday, June 23, 2008
Measuring context switching speed of different OSs
OpenBSD 4.3
#include <sys/time.h>
double GetAbsTimer() {
struct timespec ts;
int result=clock_gettime(CLOCK_REALTIME,&ts);
if (result!=0) exit(1);
return ts.tv_sec + ts.tv_nsec*1e-9;
}
The time precision is about 1.3 microseconds.
Fedora 9:
The docs say you can use CLOCK_REAL_TIME_HR after including
The time step is also 1.3 microseconds.
Token passing test: the parent process opens a pipe and forks a child. In a loop, the parent writes a byte to the pipe; the child reads the byte and writes it back. Repeat a few hundred thousand times (a rough sketch of the loop follows the results). Results:
Windows 64
Note that Windows pipes don't get anywhere close to that number.
QueueUserAPC is the method of choice on Windows, and we're testing switching between two threads in the same process space (which is a valid test, because threads, not processes, are how you parallelize on Windows).
OpenBSD 4.3
ThinkPad T60 (2.3GHz Core Duo): 130K roundtrips/sec (56.5K per GHz)
Linux 2.6.23
ThinkPad T60 (2.3GHz Core Duo): 22K roundtrips/sec (9.5K per GHz). *HORRIBLE*
That's right, 1/25th of the speed. Don't let anyone tell you virtualization is efficient. It's cool, but it's not efficient.
AMD Athlon 64 3200+ (2GHz), 2.6.23.8-34 kernel: 171K switches/sec (85K per GHz)
Notes: cat /proc/cpuinfo, uname -a
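The test itself looks roughly like this (a sketch, not the exact benchmark source; it uses POSIX pipe() and fork(), with two one-way pipes since pipes are unidirectional on Linux):
// Parent sends a 1-byte token to the child, child echoes it back; time N roundtrips.
#include <unistd.h>
#include <sys/wait.h>
#include <cstdio>
#include <time.h>
int main() {
    int up[2], down[2];                  // parent->child and child->parent
    pipe(up); pipe(down);
    const int N = 200000;
    char tok = 'x';
    if (fork() == 0) {                   // child: echo the token back N times
        for (int i = 0; i < N; i++) {
            read(up[0], &tok, 1);
            write(down[1], &tok, 1);
        }
        _exit(0);
    }
    timespec t0, t1;
    clock_gettime(CLOCK_REALTIME, &t0);
    for (int i = 0; i < N; i++) {        // parent: send token, wait for it to come back
        write(up[1], &tok, 1);
        read(down[0], &tok, 1);
    }
    clock_gettime(CLOCK_REALTIME, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("%.0f roundtrips/sec\n", N / secs);
    wait(NULL);
}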
Some conclusions:
1. Linux sucks at context switching on Intel chips
2. Linux rules at context switching on AMD chips, especially Opteron
3. OpenBSD is faster than Linux on intel chips
VirtualBox guest additions on Fedora, Ubuntu
Fedora:
yum -y install binutils gcc make kernel-headers kernel-devel
Ubuntu:
sudo apt-get install build-essential
sudo apt-get install linux-headers-`uname -r`
I haven't run OpenBSD under VirtualBox, but on that OS the package installer is pkg_add. As in,
pkg_add -v ftp://ftp.openbsd.org/pub/OpenBSD/4.3/packages/i386/xemacs-21.4.19p5.tgz
Friday, May 30, 2008
A program to measure integer, multi-threading and context-switching performance. More info at
www.alexeilebedev.com/benchmark
You can see for yourself how slow your Intel quad core chip really is.
Wednesday, May 28, 2008
Opteron 285 context switch twice as fast as Xeon E5345
Here is a test I ran under Windows XP 64-bit on two machines: one with two dual-core Opterons @ 2.6 GHz, the other with two Xeon E5345s (quad core @ 2.33 GHz).
Two threads, each sitting in WaitForMultipleObjectsEx with an infinite timeout.
We call QueueUserAPC(threadA, funcA)
funcA() {
QueueUserAPC(threadB, funcB);
}
funcB() {
QueueUserAPC(threadA, funcA);
}
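Fleshed out, the test looks roughly like the sketch below (my reconstruction, not the original source; it uses SleepEx as the alertable wait instead of WaitForMultipleObjectsEx, which behaves the same way for APC delivery):
// Two threads sit in alertable waits; each APC queues the next APC onto the other thread.
#include <windows.h>
#include <cstdio>
static HANDLE g_thread[2];
static volatile LONG g_count = 0;
static VOID CALLBACK Bounce(ULONG_PTR me) {
    InterlockedIncrement(&g_count);
    QueueUserAPC(Bounce, g_thread[1 - me], 1 - me);  // wake the other thread
}
static DWORD WINAPI Waiter(LPVOID) {
    for (;;) SleepEx(INFINITE, TRUE);                // alertable wait; runs queued APCs
}
int main() {
    g_thread[0] = CreateThread(NULL, 0, Waiter, NULL, 0, NULL);
    g_thread[1] = CreateThread(NULL, 0, Waiter, NULL, 0, NULL);
    QueueUserAPC(Bounce, g_thread[0], 0);            // start the ping-pong
    Sleep(1000);
    printf("%ld APC calls in ~1 second\n", (long)g_count);
}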
This basically measures context-switching speed. When run in a single thread on the AMD machine, each function gets invoked about 550K times/sec.
With two threads, the AMD machine does 140K calls/sec for each function.
The Intel machine can only pull in 62K calls/sec.
Now, this is a very realistic kind of load for a multi-threaded app. A real-life app is not spending all of its time doing integer math, but in fact it does do a lot of context switching, if only to/from the OS. When every 62 context switches eat up a whole millisecond of your time, it's very hard to build something both multi-threaded and fast.
By the way, in case you wonder, this IS the fastest way to force a context switch that I know of. I tried passing a 1-byte token between two anonymous pipes (i.e. each thread does a blocking read of 1 byte, then writes it back for the other thread to receive). The results are not inspiring -- 80K tokens/sec for each thread on AMD, even fewer on Xeon.
More numbers: Dual AMD 285 (2x dual core, 2.6 ghz), Windows 32-bit: 105K calls/sec.
Dual Xeon X5365 (2x quad core, 3.00ghz): 101K calls/sec
The conclusion: something about Intel processors really makes context switching slow.
Another conclusion: whenever possible, use x64 code. It's not the "large memory" benefit, it's the extra 8 registers that will make your code fly.
Monday, October 29, 2007
syncdir.exe
Usage: syncdir <A> <B> (download)
You give it two directories, and it copies everything from directory A to directory B (based on file dates).
One optimization is reading each directory listing only once (this makes a big difference for network drives).
Another is that the smallest files are copied first. This tends to move more useful data per unit of time than dumb depth-first copiers. Why? Imagine a huge file sitting early in the directory tree; it's going to block your transfer, and now you have to figure out how to skip it -- pain in the ass. Sorting by size avoids this.
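A rough sketch of the idea in C++17 (not syncdir's actual source; error handling and the /ext filters are omitted): enumerate the source tree once, sort ascending by size, then copy anything that is missing or older at the destination.
#include <algorithm>
#include <filesystem>
#include <vector>
namespace fs = std::filesystem;
void sync_dir(const fs::path& a, const fs::path& b) {
    std::vector<fs::path> files;
    for (auto& e : fs::recursive_directory_iterator(a))
        if (e.is_regular_file()) files.push_back(e.path());
    std::sort(files.begin(), files.end(), [](const fs::path& x, const fs::path& y) {
        return fs::file_size(x) < fs::file_size(y);   // smallest files move first
    });
    for (auto& src : files) {
        fs::path dst = b / fs::relative(src, a);
        if (!fs::exists(dst) || fs::last_write_time(dst) < fs::last_write_time(src)) {
            fs::create_directories(dst.parent_path());
            fs::copy_file(src, dst, fs::copy_options::overwrite_existing);
        }
    }
}
int main(int argc, char** argv) { if (argc == 3) sync_dir(argv[1], argv[2]); }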
Examples of other options: /ext:.cpp;.h or /noext:.obj;.dll
addpath.exe
Usage: addpath <path> (download)
Modifies a variable in the registry (HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment\Path) to include your path.
Any program that's already running won't see the change, so you'd have to restart it.
Other path modifiers are either hard to use (they come with a gazillion options, so you have to study them for longer than it takes to do this through the Windows UI), or they fuck up the key by storing it back as REG_SZ without expanding the %SYSTEMDIR%-style substrings in it. Also, they don't check for duplicates and happily add your path to the search path multiple times.
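Under the hood this is just a registry read-modify-write. A minimal sketch of the idea (not addpath.exe's actual source; it needs admin rights, assumes the stored value is null-terminated, and the duplicate check is a crude substring match):
#include <windows.h>
#include <string>
#include <cstdio>
int main(int argc, char** argv) {
    if (argc < 2) { printf("usage: addpath <dir>\n"); return 1; }
    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment",
            0, KEY_QUERY_VALUE | KEY_SET_VALUE, &key) != ERROR_SUCCESS) return 1;
    char buf[8192] = "";
    DWORD size = sizeof(buf), type = 0;
    RegQueryValueExA(key, "Path", NULL, &type, (LPBYTE)buf, &size);
    std::string path(buf);
    if (path.find(argv[1]) == std::string::npos) {        // skip if already present
        path += ";";
        path += argv[1];
        RegSetValueExA(key, "Path", 0, REG_EXPAND_SZ,      // keep %VAR% substrings expandable
                       (const BYTE*)path.c_str(), (DWORD)path.size() + 1);
    }
    RegCloseKey(key);
    return 0;
}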
Limitations of user interfaces.
I'm against user interfaces and for visualizations. I'll define the terms now. An interface is supposed to show you the whole thing and define your interaction. A visualization assumes you have the structure in your head (at least on some level of detail), and tries to show you a piece of that structure in more detail.
A visualization can have controls in it that affect the underlying thing, which, you might argue, makes it an interface. But the difference is that an interface is at the "root", and describes everything, and a visualization is a "leaf".
The reason I got to think about this is that there is not a single useful graphical programming language out there. There must be a reason for that. Actually, the reason is not hard to find. These guys waste valuable space on the screen arranging boxes. While they do that, they inadvertently engage the programmer's obsessive-compulsive instinct in a counterproductive way, making said programmer rearrange the boxes until they look good. For even the simplest problems (bubble sort) that's impossible. [If someone points me to a beautiful-looking bubble sort implemented in a visual language, I'll change this to "heap sort" to hopefully stump them for good :-)].
Supposing that you could arrange flow control boxes in a way that makes a heap sort look good, will they really reflect the true nature of heap sort, its tree-like, jumpy nature? No, that has to be visualized in the mind's eye. But the mind's eye is not an independent organ, it requires the physical eye not to be actively busy. So a graphical language actively blocks you from thinking. Here I'm presuming that programmers think visually, and I would say that's very much the case.
So, unless a direct visualization (which respects the structure of the problem) is possible, it's better to stay away from the visual stuff so as not to preclude the human from creating that visualization inside their head.