December 06 2013

Prosody with authentication against LDAP/Active Directory

I am using

  • Prosody v0.9.1
  • sasl2-bin v2.1.25
  • Debian 8/jessie

You need several packages:

apt-get update ; apt-get install sasl2-bin libsasl2-modules-ldap lua-ldap lua-cyrussasl

and configs:

/etc/default/saslauthd

START=yes
MECHANISMS='ldap'
MECH_OPTIONS='/etc/saslauthd.conf'

/etc/saslauthd.conf

ldap_servers: ldap://ldap.example.com/
ldap_search_base: ou=foo,dc=example,dc=com

ldap_bind_dn: ldap-user-for-binding
ldap_bind_pw: pw-for-that-user
ldap_use_sasl: no
ldap_start_tls: no
ldap_auth_method: bind

ldap_filter: (sAMAccountName=%u)
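
You can sanity-check the bind credentials and the filter on their own with ldapsearch (from the ldap-utils package); 'someuser' is a placeholder for a real account name:

ldapsearch -x -H ldap://ldap.example.com/ -D ldap-user-for-binding -w pw-for-that-user -b ou=foo,dc=example,dc=com '(sAMAccountName=someuser)'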

/etc/prosody/prosody.cfg.lua

authentication = 'cyrus'
cyrus_service_name = 'xmpp'

-- and make sure to configure SSL properly
ssl = {
        key = 'x';          -- path to your private key file
        certificate = 'y';  -- path to your certificate file

        options = { 'no_sslv2', 'no_sslv3', 'no_ticket', 'no_compression' };
        ciphers = 'HIGH:!DSS:!aNULL:!DES-CBC3-SHA:!ECDHE-RSA-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:@STRENGTH';
}

Add the system user ‘prosody’ to the ‘sasl’ group and restart both services:

adduser prosody sasl ; service saslauthd restart ; service prosody restart
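
Before pointing any XMPP client at the server, you can test the saslauthd/LDAP path with testsaslauthd (also in sasl2-bin); user and password are placeholders, and -s must match the cyrus_service_name from the Prosody config. Run it as root or as a member of the ‘sasl’ group so it can reach the saslauthd socket:

testsaslauthd -u someuser -p somepassword -s xmpp

On success it prints: 0: OK "Success."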

If something goes wrong, have a look at /var/log/auth.log for SASL problems, or at the Prosody logs.

November 20 2013

a tale of fail and win (image recovery/management under linux)

  1. use git-annex-assistant to create backups on several destinations
    1. use test-repo first, do some tests
    2. try on smaller directories w/ actually valuable data, create backups first
    3. annex-ize several other directories
    4. remove picture-backup from external HDD to make space for the new backup via git annex (very bad idea)
    5. annex-ize several GB of pictures dating back to 2004 (RAWs and JPGs)
    6. fail somehow several times; remove the .git directory and start anew
    7. (do some other stuff)
    8. get back to the picture dir and realize that it is empty (besides some folders); the .git directory contains nothing
  2. use ntfsundelete and some proprietary tools to recover the (merely marked as deleted) files from the NTFS volume (900 GB)
    • use git annex fsck on the recovered .git data; get only some pictures back, not very many (about 2k files)
  3. use photorec on several runs to recover .jpg and .cr2 (RAW) data
  4. try to use picasa on the files to get some sorting (and kick out unwanted data such as images from games etc.)
    • picasa somehow mangles the raw-files :(
    • picasa does not properly use the EXIF-provided file-creation date, but a mixture of that and the files’ date :(
  5. fiddle around with exiftool to get the timestamps back from the files’ EXIF data (see the sketch after this list)
    find . -type f -name '*.jpg' -exec exiftool -FileModifyDate\<DateTimeOriginal {} \;
  6. try digikam
    1. somehow works
    2. slow on previews when using ‘import from files’
    3. slow on DB handling
    4. hangs itself when moving about 6k (?) files from one folder to another
    5. switch to MySQL as backend
      • somehow fail, try google
      • realize that the internal MySQL server won’t do, install an external one
      • use ‘settings’->’Database migration’ before switching via the config
    6. speed is better
    7. use the duplicate detection to remove redundant files (takes time …)
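
The sketch referenced from step 5 above: exiftool can recurse and filter by extension on its own, so the find wrapper is optional and the Canon RAWs can be fixed in the same run (-r and -ext are standard exiftool options):

exiftool '-FileModifyDate<DateTimeOriginal' -ext jpg -ext cr2 -r .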

October 04 2011

finkregh

Here are two interesting links I found comparing the features and performance differences between Unix domain sockets and TCP loopback sockets.

http://lists.freebsd.org/pipermail/freebsd-performance/2005-February/001143.html

Excerpt: IP sockets over localhost are basically looped back network on-the-wire IP. There is intentionally “no special knowledge” of the fact that the connection is to the same system, so no effort is made to bypass the normal IP stack mechanisms for performance reasons. For example, transmission over TCP will always involve two context switches to get to the remote socket, as you have to switch through the netisr, which occurs following the “loopback” of the packet through the synthetic loopback interface. Likewise, you get all the overhead of ACKs, TCP flow control, encapsulation/decapsulation, etc. Routing will be performed in order to decide if the packets go to the localhost. Large sends will have to be broken down into MTU-size datagrams, which also adds overhead for large writes. It’s really TCP, it just goes over a loopback interface by virtue of a special address, or discovering that the address requested is served locally rather than over an ethernet (etc).

UNIX domain sockets have explicit knowledge that they’re executing on the same system. They avoid the extra context switch through the netisr, and a sending thread will write the stream or datagrams directly into the receiving socket buffer. No checksums are calculated, no headers are inserted, no routing is performed, etc. Because they have access to the remote socket buffer, they can also directly provide feedback to the sender when it is filling, or more importantly, emptying, rather than having the added overhead of explicit acknowledgement and window changes. The one piece of functionality that UNIX domain sockets don’t provide that TCP does is out-of-band data. In practice, this is an issue for almost no one.

http://osnet.cs.binghamton.edu/publications/TR-20070820.pdf

Excerpt: It was hypothesized that pipes would have the highest throughput due to their limited functionality, since they are half-duplex, but this was not true. For almost all of the data sizes transferred, Unix domain sockets performed better than both TCP sockets and pipes, as can be seen in Figure 1 below. Figure 1 shows the transfer rates for the IPC mechanisms, but it should be noted that they do not represent the speeds obtained by all of the test machines. The transfer rates are consistent across the machines with similar hardware configurations though. On some machines, Unix domain sockets reached transfer rates as high as 1500 MB/s.
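
A rough way to reproduce the comparison on a single machine, assuming socat and dd are available (read the throughput off dd’s summary; absolute numbers vary wildly with hardware and buffer sizes):

# receiver and sender over a UNIX domain socket
socat -u UNIX-LISTEN:/tmp/bench.sock - | dd of=/dev/null bs=1M &
dd if=/dev/zero bs=1M count=1024 | socat -u - UNIX-CONNECT:/tmp/bench.sock

# the same transfer over TCP on the loopback interface
socat -u TCP-LISTEN:9000,reuseaddr - | dd of=/dev/null bs=1M &
dd if=/dev/zero bs=1M count=1024 | socat -u - TCP:127.0.0.1:9000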

Unix domain sockets vs TCP Sockets
Tags: linux tcp socket

October 01 2011

moreutils

  • combine: combine the lines in two files using boolean operations
  • ifdata: get network interface info without parsing ifconfig output
  • ifne: run a program if the standard input is not empty
  • isutf8: check if a file or standard input is utf-8
  • lckdo: execute a program with a lock held
  • mispipe: pipe two commands, returning the exit status of the first
  • parallel: run multiple jobs at once
  • pee: tee standard input to pipes
  • sponge: soak up standard input and write to a file
  • ts: timestamp standard input
  • vidir: edit a directory in your text editor
  • vipe: insert a text editor into a pipe
  • zrun: automatically uncompress arguments to command
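
A few of these in action (filenames are placeholders):

# sponge: rewrite a file in place -- a plain redirect would truncate it before sort reads it
sort config.txt | sponge config.txt

# ts: prepend a timestamp to every line of a long-running command's output
make 2>&1 | ts '%Y-%m-%d %H:%M:%S'

# ifne: only send the mail if the pipeline actually produced output
find /var/backups -mtime +30 | ifne mail -s 'stale backups' root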

September 15 2011

Ali Abbas » Linux Kernel Route Cache

To understand the importance of the routing cache, it is important to keep in mind and visualize the 3 main routing hash tables in use in the kernel for routing decisions… the Route Cache (what we will be discussing), the Route Policy Database and the Route Table. It is also in this order that the network subsystem queries the tables to make a forwarding decision. To display the “Route Cache”, one could simply issue the “ip route show cache” command.
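
The commands in question, for reference (note that the IPv4 route cache was removed in Linux 3.6, so this applies to kernels of that era):

# display the cached routing decisions
ip route show cache
# flush the cache, e.g. after changing routes, so new lookups hit the tables again
ip route flush cache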

July 22 2011

Two-factor authentication with Google Authenticator and the poor man's Nucular Code Sealed Authenticator System - netzsheriff.de

Last year, Google introduced optional two-factor authentication, or, as they call it, "2-step verification", for Google Apps accounts. In addition to something you know, your password, login authentication also uses something you have: your mobile phone. For this you either receive an SMS from Google with a pseudo-random sequence of digits, or an app on the phone generates a new one every 30 seconds. Corresponding apps are available for Android, iPhone and Blackberry.

In addition, Google offers a PAM module,[...]
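
The post is cut off here; a minimal sketch of the usual setup on Debian-family systems (package and file names may differ elsewhere):

apt-get install libpam-google-authenticator
# run as the user who should log in with codes; generates the secret and emergency scratch codes
google-authenticator

Then add the module to the service's PAM stack, e.g. in /etc/pam.d/sshd:

auth required pam_google_authenticator.so

and set 'ChallengeResponseAuthentication yes' in /etc/ssh/sshd_config before restarting sshd.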

July 20 2011

How to kill a TCP connection using netstat

You cannot kill a TCP connection using the netstat utility. netstat is used for:

Display network connections
Routing tables
Interface statistics
Masquerade connections
Multicast memberships
And much more

However, Linux supports two other utilities that can be used to kill a TCP connection.
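
The excerpt ends before naming them; whether these are the two the article meant is an assumption, but the usual candidates are tcpkill (from the dsniff package) and, on much newer systems whose kernels have CONFIG_INET_DIAG_DESTROY, ss:

# forge RSTs for matching connections seen on eth0
tcpkill -i eth0 host 192.0.2.10 and port 80

# ask the kernel to destroy matching sockets directly
ss -K dst 192.0.2.10 dport = :80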

May 29 2011

finkregh
After a relatively long road traveled with a few bumps along the way, as of yesterday, Linus's mainline tree (2.6.39+) contains literally every component needed for Linux to run both as a management domain kernel (Dom0) and a guest (DomU).
[...]
Linux mainline contains all the Xen code bits for Dom0 and DomU support (Wim Coekaerts Blog)
Tags: linux xen

January 29 2011

QuickTun - Qontrol.nl Wiki

QuickTun is probably the simplest VPN tunnel software ever, yet it's very secure. It relies on the NaCl encryption library by D. J. Bernstein.

QuickTun uses the curve25519xsalsa20poly1305 crypto-box functionality of the NaCl library for secure public-key encryption.

And that's about all QuickTun does; encrypting and sending data. No fancy features which would only lead to bloating the binary. In fact, QuickTun itself has only a few hundred lines of pure C code, making it dead simple to maintain, analyze, debug and fix.
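
A sketch of a point-to-point setup based on the environment-variable configuration documented in the QuickTun wiki; the variable and binary names are taken from that documentation and should be double-checked, and the addresses and keys are placeholders:

# generate a NaCl keypair on each endpoint
quicktun.keypair

INTERFACE=tun0 PROTOCOL=nacltai \
LOCAL_ADDRESS=0.0.0.0 REMOTE_ADDRESS=198.51.100.7 \
PRIVATE_KEY=<own-private-key> PUBLIC_KEY=<peer-public-key> \
quicktun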
