Thursday, March 20, 2008

Good Info on syslogd

Sysklogd provides two system utilities that support system logging and kernel message trapping. Support for both Internet and Unix domain sockets enables this utility package to handle both local and remote logging.

System logging is provided by a version of syslogd(8) derived from the stock BSD sources. Support for kernel logging is provided by the klogd(8) utility which allows kernel logging to be conducted in either a standalone fashion or as a client of syslogd.

Syslogd provides the kind of logging that many modern programs use. Every logged message contains at least a time and a hostname field, and normally a program name field too, though that depends on how trustworthy the logging program is.

Remote Network support

These modifications provide network support to the syslogd facility. Network support means that messages can be forwarded from one node running syslogd to another node running syslogd, where they will actually be logged to a disk file.

To enable this you have to specify the -r option on the command line. The default behavior is that syslogd won't listen to the network.

The strategy is to have syslogd listen on a unix domain socket for locally generated log messages. This behavior will allow syslogd to inter-operate with the syslog found in the standard C library. At the same time syslogd listens on the standard syslog port for messages forwarded from other hosts. To have this work correctly, the services(5) file (typically found in /etc) must contain the following entry:

syslog 514/udp

If this entry is missing, syslogd can neither receive remote messages nor send them, because the UDP port can't be opened. Instead, syslogd dies immediately after emitting an error message.

To cause messages to be forwarded to another host, replace the normal file line in the syslog.conf file with the name of the host to which the messages are to be sent, prepended with an @.

For example, to forward ALL messages to a remote host use the following syslog.conf entry:

# Sample syslogd configuration file to
# forward all messages to a remote host.
*.* @hostname

To forward all kernel messages to a remote host the configuration file would be as follows:

# Sample configuration file to forward all kernel
# messages to a remote host.
kern.* @hostname

If the remote hostname cannot be resolved at startup because the name server is not yet accessible (it may be started after syslogd), you don't have to worry. Syslogd will retry resolving the name ten times before complaining. Another way to avoid this is to place the hostname in /etc/hosts.

With normal syslogds you would get syslog loops if you sent messages received from a remote host back out to the same host (or, more elaborately, to a third host that sends them back to the first one, and so on). In my domain (Infodrom Oldenburg) we accidentally created one, and our disks filled up with the same single message. :-(

To avoid this, messages received from a remote host are no longer sent out to another (or the same) remote host. If there are scenarios where this doesn't make sense, please drop me (Joey) a line.

If the remote host is located in the same domain as the host syslogd is running on, only the simple hostname will be logged instead of the whole FQDN (fully qualified domain name).

In a local network you may provide a central log server so that all the important information is kept on one machine. If the network consists of different domains, you don't have to put up with fully qualified names being logged instead of simple hostnames: you may want to use this server's strip-domain feature (-s). You can tell syslogd to strip off several domains other than the one the server is located in and log only simple hostnames.

Using the -l option it is also possible to define single hosts as local machines. This, too, results in logging only their simple hostnames and not their FQDNs.
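
Putting the pieces together, a central log server might be started like this (a rough sketch; the domain and host names are placeholders, and -s and -l take colon-separated lists per the syslogd(8) man page):

# Receive remote messages (-r), strip the domains foo.com and
# bar.de from logged hostnames (-s), and log the host "gateway"
# by its simple name (-l):
syslogd -r -s foo.com:bar.de -l gateway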

The UDP socket used to forward messages to remote hosts or to receive messages from them is only opened when it is needed. In releases prior to 1.3-23 it was opened every time, even when it was not being used for receiving or forwarding.

OUTPUT TO NAMED PIPES (FIFOs)
This version of syslogd has support for logging output to named pipes (FIFOs). A FIFO or named pipe can be used as a destination for log messages by prepending a pipe symbol (``|'') to the name of the file. This is handy for debugging. Note that the FIFO must be created with the mkfifo command before syslogd is started.

The following configuration file routes debug messages from the kernel to a fifo:

# Sample configuration to route kernel debugging
# messages ONLY to /usr/adm/debug which is a
# named pipe.
kern.=debug |/usr/adm/debug
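
To go with that entry, the pipe has to exist first; a minimal sketch using the path from the example above:

# Create the named pipe before starting syslogd, then read
# from it to watch kernel debug messages as they arrive:
mkfifo /usr/adm/debug
cat /usr/adm/debug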

SECURITY THREATS

There is the potential for the syslogd daemon to be used as a conduit for a denial-of-service attack. Thanks go to John Morrison (jmorriso@rflab.ee.ubc.ca) for alerting me to this potential. A rogue program(mer) could very easily flood the syslogd daemon with syslog messages, resulting in the log files consuming all the remaining space on the filesystem. Activating logging over the inet domain sockets will of course expose a system to risks from beyond the programs or individuals on the local machine.

There are a number of methods of protecting a machine:

1. Implement kernel firewalling to limit which hosts or networks have access to the 514/UDP socket (see the iptables sketch after this list).

2. Direct logging to an isolated or non-root filesystem which, if filled, will not impair the machine.

3. Use the ext2 filesystem, which can be configured to reserve a certain percentage of a filesystem for use by root only. NOTE that this will require syslogd to be run as a non-root process. ALSO NOTE that this will prevent usage of remote logging, since syslogd will be unable to bind to the 514/UDP socket.

4. Disable inet domain sockets to limit risk to the local machine.

5. Use step 4 and, if the problem persists and is not secondary to a rogue program/daemon, get a 3.5 ft (approx. 1 meter) length of sucker rod* and have a chat with the user in question.

* Sucker rod def.: 3/4in., 7/8in. or 1in. hardened steel rod, male threaded on each end. Its primary use is in the oil industry in western North Dakota and other locations to pump ('suck') oil from oil wells. Secondary uses are for the construction of cattle feed lots and for dealing with the occasional recalcitrant or belligerent individual.
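
For item 1, a minimal sketch using iptables (192.168.0.0/24 stands in for whatever hosts you actually trust to send you syslog traffic):

# Accept syslog datagrams only from the trusted LAN and
# drop 514/UDP from everyone else:
iptables -A INPUT -p udp --dport 514 -s 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p udp --dport 514 -j DROP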

Network Service Processors

By combining the functions of a control-plane processor and a network processor, the Octeon processor can take on data traffic at rates of up to 10 Gbits/s. Developed by Cavium Networks, this processor performs content, security, compression/decompression, and TCP/IP offload functions thanks to an architecture that blends as many as 16 MIPS64 64-bit processor cores, two serial-packet-interface processors (SPI-4.2), and specialized coprocessors.

The Octeon handles data traffic in layers 3 to 7, supporting firewalls, virtual private networks, and anti-virus and anti-spam features to deter network threats. It supports a wide range of applications, such as HTTP, XML, VoIP, mail, chat, MP3, and MPEG. To deal with all of these requirements, the Octeon employs a highly programmable architecture and dedicated on-chip accelerators for TCP/IP, security (3DES, AES, RSA, DH, hashing), regular-expression handling, and compression/decompression (GZIP).

Each dual-issue 64-bit integer core includes a 32-kbyte instruction cache and an 8-kbyte data cache. All 64-bit cores share a 1-Mbyte, eight-way, set-associative L2 cache with error-checking and correction to ensure data integrity. To feed data into the chip, an on-board double-data-rate (DDR) SDRAM interface (either 72 or 144 bits wide) performs data transfers at up to 800 MHz from external DDR I or DDR II memory. In aggregate, the MIPS processors execute 19.2 billion instructions/s when the chip operates from a 600-MHz clock (16 dual-issue cores × 2 instructions per cycle × 600 MHz).

The multiple cores on the Octeon can run a full operating system (such as a multiprocessor Linux OS) and/or run tuned data-plane-like code. Task allocation of the cores is determined by the programmer. The dual packet-I/O processors support IPv4 and IPv6 traffic at up to 10 Gbits/s and perform L2-L4 parsing, error checks, tagging, queuing, and work scheduling.

Dedicated engines packed into the regular-expression processor block accelerate pattern and signature match operations—necessary for anti-virus, IDS, and content processing applications—up to 4 Gbits/s. The dedicated TCP acceleration engine performs hardware-based packet synchronization, timer support, and buffer management to deliver 10 Gbits/s of full TCP termination. And, the compression/decompression processor handles GZIP, PKZIP, and other compression protocols to deliver compressed data at up to 4 Gbits/s.

Four versions of the Octeon processor, with two, four, eight, or 16 MIPS64 cores, will be available. This will let designers better match performance and cost. Versions with two or four MIPS64 cores will use a 72-bit memory interface and come in a 709-lead package. The eight- and 16-core versions pack a 144-bit wide memory interface and come in a 1500-lead package.

In 10,000-unit lots, the Octeon processors will cost from $125 to $750 each.

Wednesday, March 19, 2008

How PCI Works (in brief)

The acronym PCI stands for Peripheral Component Interconnect, which aptly describes what it does. PCI was designed to satisfy the requirement for a standard interface for connecting peripherals to a PC, capable of sustaining the high data transfer rates needed by modern graphics controllers, storage media, network interface cards and other devices.
Earlier bus designs were all lacking in one respect or another. The IBM PC-AT standard ISA (Industry Standard Architecture) bus, for example, can manage a data transfer rate of 8MB/sec at best; in practice, the throughput that can be sustained is much less than that. Other factors, like the 16-bit data bandwidth and the 24-bit-wide address bus - which restricts memory-mapped peripherals to the first 16MB of memory address space - made the ISA bus seem increasingly outmoded.

More recent designs such as IBM's MCA (Micro Channel Architecture) and the EISA (Extended ISA) bus, though having higher bandwidth (32 bits) and providing better support for bus mastering and direct memory access (DMA), were not enough of an improvement over ISA to offer a long term solution. New, faster versions of both MCA and EISA were proposed, but were received without enthusiasm. Neither promised to be an inexpensive solution, and cost has always been an important factor in the competitive PC market.

VL BUS
One attempt to improve bus performance inexpensively was the VL-Bus. Prior to this, some PC vendors had started providing proprietary local bus interfaces enabling graphics boards to be connected directly to the 486 processor bus. These systems had one major failing: since the interfaces were proprietary, support for them by third-party peripheral vendors was limited, so there was little chance that a user would ever be able to purchase a compatible graphics card as an upgrade.

VESA's intervention gave manufacturers a standard to work to. The standard was based on existing chip sets, which had the advantages of low cost and of enabling the technology to be brought to market quickly. The disadvantage was the rather crude implementation. The VL-Bus was a success in its day because it met a need, but it never looked like a long-term solution.

Some of the parties involved in the design of the VL-Bus standard felt that a solution based on existing bus technologies had too many design compromises to be worth considering. This group, led by Intel, split off to form the PCI Special Interest Group with the aim of producing a new bus specification from scratch.

Although PCI has been described as a local bus, in fact it is nothing of the sort. The term 'local bus' means that the address, data and control signals are directly connected - in other words, 'local' - to the processor. The VL-Bus is a true local bus, since devices are connected to the CPU via nothing more than some electrical buffering. This is one of the reasons for its simplicity, but it is also the reason for many of its limitations.

One problem is that a local bus is by definition synchronous. The bus speed is the same as the external processor clock speed. Expansion card vendors therefore have the difficulty of ensuring that their products will run at a range of speeds. The upper limit of the range cannot be defined, and is liable to increase as new processors are introduced. This is a recipe for compatibility problems. Such problems have been experienced by users of VL-Bus systems using the AMD 80MHz processors, which have a 40MHz bus clock.

The second problem with a true local bus is that the electrical load (and consequently the number of expansion slots) that can be driven by the bus decreases as the clock speed increases. This creates the situation where typically three slots can be provided at 33MHz, but only two at 40MHz and just one at 50MHz. This is particularly awkward given that most motherboards are designed to work at a range of clock speeds and usually come with three slots. Many manufacturers simply ignored the problem.


PCI DESIGN

PCI's designers decided to avoid these difficulties altogether by decoupling the PCI bus clock from the processor clock. The top speed for most PCI cards is 33MHz. The PCI 2.1 specification made provision for a doubling of the speed to 66MHz, but support for this higher speed was optional.

At 33MHz, with a 32-bit data bus, the theoretical maximum data transfer rate of the PCI bus is 132MB/sec (33 million transfers/sec × 4 bytes). At 66MHz, with a 64-bit data path, the top speed would be 528MB/sec (66 million transfers/sec × 8 bytes).

The PCI bus can run at lower speeds. In a system clocked at 25MHz, for example, the bus could also run at this speed. This was an important consideration at the time PCI was being developed.

Peripherals must be designed to work over the entire range of permitted speeds. In the original PCI specification the lower limit to the speed range was given as 16MHz; in PCI revision 2.0 this was reduced to 0MHz. This supports 'green' power saving modes by allowing the system to run at reduced speed for lower power consumption, or to be put into 'suspend' mode (0MHz), without any bus status information being lost.

The number of devices on a PCI bus depends on the electrical load. In practice this means three or four slots, plus an on-board disk controller and a secondary bus. Up to 256 PCI buses can be linked together, though, to provide extra slots, so this is not really a limitation.

INTERRUPT HANDLING
The concept of 16 discrete IRQ lines, each uniquely assigned to a device, is peculiar to the ISA bus and its derivatives. The CPU sees only a single interrupt signal, obtains an interrupt vector address and then processes the interrupt routine at that address. The use of 16 lines was the method chosen by the designers of the original IBM PC to tell the interrupt controller which address to supply.

Each PCI slot has four interrupt lines connected to it, designated INTA# to INTD#. The first (or only) interrupt-using function on a PCI board must be connected to INTA#. The other three lines allow up to four functions to be combined on one board using INTA# - INTD# in that order.

The PCI interrupt lines and the output from the ISA interrupt controller are combined in a programmable interrupt router, which generates the single interrupt signal for the CPU. How they are combined is not defined by the PCI specification. PCI interrupts are level-triggered and therefore shareable, so some of them may be connected together.

The IBM PC architecture expects particular devices to use particular IRQs (e.g. the primary disk controller must use IRQ14). Furthermore, because ISA interrupt lines cannot be shared, PC interrupt routines expect that when they are called, they are servicing their specific device and no other.

This means that in a PC, the INTx# lines in each PCI slot - or those that are being used - must each be mapped to a separate IRQ which the operating system or driver software will expect the device in that slot to use. This is usually done using the BIOS Setup utility. Some early PCI systems which did not have this facility required an ISA 'paddle-board' to be used with add-ins like caching disk controllers to ensure they were connected to the appropriate IRQ.

Integrated (on-board) devices are hard-configured to use the appropriate interrupts. Were it not for the fact that specific devices must use specific IRQs, PCI configuration would be completely automatic as the interrupt level could be assigned by the system at start-up.

The PCI bus was an expansion bus designed to meet the requirements of PC users now and for the foreseeable future. With its high speed, 64-bit data bandwidth and wholehearted support for bus mastering and burst-mode data transfers, its maximum throughput is unlikely to become a bottleneck for some time. And its processor independence will be a valuable asset as our PCs move further away from the limitations of the '80s Intel x86 architecture.

From a support point of view, PCI's plug-and-play ambitions are welcome. We are never likely to see completely automatic configuration, nor get away from such restrictions as 16 interrupt request lines, while we continue to demand PC compatibility. But PCI, which inherits none of these limitations, is a step in the right direction.

ReadThatWords.com

ReadThatWords.com is an online tool that can instantly convert Web Pages, Office documents and PDF files into MP3 files.

While there are other free tools that can convert text to MP3, ReadThatWords offers two distinct advantages:

1. It can turn almost anything into MP3, including PowerPoint slides, Word docs, PDFs, HTML web pages and even RSS feeds.

2. You have an array of different voices to choose from - like a US male voice, a UK female voice or a voice with an Indian accent.

And if you are converting a large document to an MP3, you need not wait in the browser for the conversion to finish - ReadThatWords.com will send you an email when the recording is done and the MP3 is ready to download.

readthewords.com - The service was created primarily for people with disabilities, but others will find it useful as well. Thanks Jane.

Secure Your Linux Computer in Network

What should you do to a new Linux PC before connecting it to the Internet? Always keep the software on your computer up to date with the latest security patches, whether you are running Linux, Windows, BSD or WhoKnowsWhat. Your distribution will release regular security patches, available over the Internet, which should be applied. As with Windows, this should always be your first Internet destination. Your second Internet destination may be to install system-monitoring software.

Configuring the /etc/hosts.deny and /etc/hosts.allow files

To further secure this server from unwanted traffic or potential attackers, we may wish to limit the hosts or computers that can connect to a given server application. The /etc/hosts.deny and /etc/hosts.allow files allow us to do just that.

When a computer attempts to access a service such as a secure shell server on your new Linux PC, the /etc/hosts.deny and /etc/hosts.allow files will be processed and access will be granted or refused based on some easily configurable rules. Quite often for desktop Linux PCs it is very useful to place the following line in the /etc/hosts.deny file:
ALL: ALL

This will deny access to all services from all hosts. It seems pretty restrictive at first glance, but we then add hosts to the /etc/hosts.allow file that will allow us to access services. The following are examples that allow some hosts remote secure shell access:
sshd: 192.168.0.1 #allow 192.168.0.1 to access ssh
sshd: somebox.somedomain.com #allow somebox.somedomain.com to access ssh

These two files provide powerful host based filtering methods for your Linux PC.

If your new Linux PC has services that will receive connections from the Internet, make sure you understand their configurations and tune them as necessary. For example, if your Linux PC will receive secure shell connections, check the sshd_config file (for Mandriva it is /etc/ssh/sshd_config) and disable options like root login. Every Linux PC has a root user, so you should disable root login via ssh in order to deter brute-force password-cracking attempts against your super-user account.
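
For instance, a couple of sshd_config directives (PermitRootLogin and AllowUsers are standard OpenSSH options; the user names here are placeholders):

# /etc/ssh/sshd_config - refuse direct root logins over ssh;
# log in as a normal user and use su or sudo instead:
PermitRootLogin no
# Optionally restrict ssh logins to named accounts:
AllowUsers alice bob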

Unlike Windows, Linux does not present itself as a "server" version or as a "desktop" version. During a typical installation of Linux the choice is yours as to exactly what software you wish to install and therefore exactly what type of a system you are constructing. Because of this, you need to be aware of the packages that the installation program is installing for you.

Install and configure a software firewall

A local software firewall can provide a "just in case" layer of security on any type of network. These types of firewalls allow you to filter the network traffic that reaches your PC and are quite similar to the Windows Firewall. The Mandriva package called Shorewall, along with a component of the Linux kernel called Netfilter, provides a software firewall. By installing and configuring Shorewall during the installation process, you can restrict or block certain types of network traffic, whether coming to or going out from your PC.
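
As a rough sketch of the idea (the file names and column layout follow Shorewall's documented zone/policy/rules scheme, but check the documentation for your version before relying on this), a simple single-interface policy might look like:

# /etc/shorewall/policy - default decisions between zones
# ($FW is the firewall itself, "net" the Internet zone):
#SOURCE   DEST   POLICY   LOG LEVEL
$FW       net    ACCEPT
net       all    DROP     info
all       all    REJECT   info

# /etc/shorewall/rules - explicit exceptions, e.g. permit inbound ssh:
#ACTION   SOURCE   DEST   PROTO   DEST PORT(S)
ACCEPT    net      $FW    tcp     22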

Blocking or allowing network traffic is one layer of security, but how do you secure a service that you do allow the Internet or your intranet to connect to? Host based security is yet another layer.

The Linux kernel itself can provide some additional networking security. Familiarize yourself with the options in the /etc/sysctl.conf file and tune them as needed. Options in this file control, for example, what type of network information is logged in your system logs.
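
For example (these are standard Linux sysctl keys; the values shown are common hardening choices rather than mandatory settings):

# /etc/sysctl.conf - log packets with impossible source addresses:
net.ipv4.conf.all.log_martians = 1
# Ignore broadcast pings and enable SYN-flood protection:
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_syncookies = 1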

Connect the PC to a router

A hardware router allows multiple PCs to share one visible, or external, Internet address. This is generally bad news for any hacker or otherwise malicious program that takes a look at your new Linux PC, as the router blocks any and all inbound network traffic that you don't specifically allow. Home networking routers are just smaller versions of what big companies use to separate their corporate infrastructure from the Internet.

Services that are not running don't provide security holes for potential hackers and don't take up those precious CPU cycles. Shut them off.

Thursday, March 6, 2008

A small Description of GCOV

Gcov is a test coverage program. Use it in concert with GCC to analyze your programs to help create more efficient, faster-running code and to discover untested parts of your program. You can use gcov as a profiling tool to help discover where your optimization efforts will best affect your code. You can also use gcov along with the other profiling tool, gprof, to assess which parts of your code use the greatest amount of computing time.

Profiling tools help you analyze your code's performance. Using a profiler such as gcov or gprof, you can find out some basic performance statistics, such as:

· how often each line of code executes

· what lines of code are actually executed

· how much computing time each section of code uses

Once you know these things about how your code works when compiled, you can look at each module to see which modules should be optimized. gcov helps you determine where to work on optimization.

Software developers also use coverage testing in concert with testsuites, to make sure software is actually good enough for a release. Testsuites can verify that a program works as expected; a coverage program tests to see how much of the program is exercised by the testsuite. Developers can then determine what kinds of test cases need to be added to the testsuites to create both better testing and a better final product.

You should compile your code without optimization if you plan to use gcov, because the optimization, by combining some lines of code into one function, may not give you as much information as you need to look for 'hot spots' where the code is using a great deal of computer time. Likewise, because gcov accumulates statistics by line (at the lowest resolution), it works best with a programming style that places only one statement on each line. If you use complicated macros that expand to loops or to other control structures, the statistics are less helpful - they only report on the line where the macro call appears. If your complex macros behave like functions, you can replace them with inline functions to solve this problem.

gcov creates a logfile called sourcefile.gcov which indicates how many times each line of a source file sourcefile.c has executed. You can use these logfiles along with gprof to aid in fine-tuning the performance of your programs. gprof gives timing information you can use along with the information you get from gcov.
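
As a quick sketch of the workflow (demo.c is a placeholder file name; -fprofile-arcs and -ftest-coverage are GCC's standard coverage-instrumentation switches):

# Compile without optimization and with coverage instrumentation:
gcc -O0 -fprofile-arcs -ftest-coverage -o demo demo.c
# Run the program; the instrumented binary writes coverage data files:
./demo
# Generate the annotated listing of per-line execution counts:
gcov demo.c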

gcov works only on code compiled with GCC. It is not compatible with any other profiling or test coverage mechanism.