Kevin Warrington


SSH Config and Agent Forwarding

Quick guide to setting up ssh config and agent forwarding.

1. Setup remote server

Enable public key authentication on the remote server in /etc/ssh/sshd_config (note: the RSAAuthentication option was removed in OpenSSH 7.4, so on modern versions only the last two lines apply):

RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile  .ssh/authorized_keys
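After editing sshd_config, reload the SSH daemon so the changes take effect. The service name varies by platform; these are the common forms for systemd and SysV-style init:

```
# systemd-based systems
sudo systemctl reload sshd

# SysV init / older systems
sudo service ssh reload
```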

2. Setup client keys

Generate an ssh key

ssh-keygen -t rsa

Copy public key to remote server

ssh-copy-id -i ~/.ssh/id_rsa.pub user@example.com

Test connection using private key

ssh -i ~/.ssh/id_rsa user@example.com date

3. Setup client config

This allows for separate ssh configuration per host:

touch ~/.ssh/config
chmod 600 ~/.ssh/config

Add the following to ~/.ssh/config:

Host remoteServer1
HostName example.com
User user
PubkeyAuthentication yes
IdentityFile ~/.ssh/id_rsa

Test the connection:

ssh remoteServer1 date
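The config file also supports patterns, which is handy when many hosts share settings. A sketch with hypothetical host names (deploy, bastion.example.com):

```
Host *.example.com
    User deploy
    IdentityFile ~/.ssh/id_rsa

Host bastion
    HostName bastion.example.com
    Port 2222
```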

4. Setup agent forwarding

ssh-agent is a user daemon that holds unencrypted ssh keys in memory. With forwarding enabled, key challenges from a remote machine are relayed through any intermediary servers back to your local machine, which answers them. This saves you from having to store your private keys on remote servers.

Turn on agent forwarding for your host, ~/.ssh/config:

Host remoteServer1
...
ForwardAgent yes
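If you only need forwarding occasionally, you can also enable it per connection with the -A flag instead of the config entry:

```
ssh -A remoteServer1
```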

Verify ssh-agent is running:

echo "$SSH_AUTH_SOCK"

Verify you have an identity loaded:

ssh-add -L

If not, add an identity:

ssh-add ~/.ssh/id_rsa

Log in, log out, and log in again. The first login requires your passphrase; subsequent logins do not:

ssh remoteServer1
exit
ssh remoteServer1
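To confirm forwarding actually worked, list the agent's identities from the remote side; it should print the same public key as a local ssh-add -L:

```
ssh remoteServer1 'ssh-add -L'
```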

Lock your agent when you are away:

ssh-add -x

Unlock your agent when you are back:

ssh-add -X

Delete all keys from your agent:

ssh-add -D

Resources:

http://www.unixwiz.net/techtips/ssh-agent-forwarding.html
https://developer.github.com/guides/using-ssh-agent-forwarding/
http://nerderati.com/2011/03/17/simplify-your-life-with-an-ssh-config-file/
https://kimmo.suominen.com/docs/ssh/
http://blogs.perl.org/users/smylers/2011/08/ssh-productivity-tips.html
http://www.symantec.com/connect/articles/ssh-and-ssh-agent

Clear the DNS Cache

You’ve updated /etc/hosts and your changes aren’t reflected in Google Chrome. Try clearing the DNS cache.

Chrome

Navigate to chrome://net-internals/#dns and click “Clear Host Cache”
Navigate to chrome://net-internals/#sockets and click “Flush Socket Pools”

Mac OS X v10.6

sudo dscacheutil -flushcache

Mac OS X v10.7+

sudo killall -HUP mDNSResponder

Keeping Your Git Fork Up-to-date

When your fork falls behind, and it will, here’s how to quickly sync it up with master.

First, clone your forked repo, if you haven't already:

git clone git@github.com:<fork>/<repo>.git
cd <repo>

Then, merge upstream and push to github:

git remote add upstream git@github.com:<original>/<repo>.git
git fetch upstream
git checkout master
git merge upstream/master
git push origin master
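The whole flow can be exercised locally with throwaway repositories, which makes it easy to see the fast-forward happen. Everything below (paths, commit messages, identity) is made up for the demo; the -b flag to git init needs git 2.28+:

```shell
set -e
cd "$(mktemp -d)"

# create an "upstream" repo with one commit
git init -q -b master upstream
cd upstream
git config user.email demo@example.com && git config user.name demo
git commit -q --allow-empty -m "initial"
cd ..

# "fork" it by cloning; the clone's origin points at upstream's path
git clone -q upstream fork

# upstream moves ahead while the fork sits still
git -C upstream commit -q --allow-empty -m "upstream change"

# sync the fork: add the upstream remote, fetch, and merge
cd fork
git remote add upstream ../upstream
git fetch -q upstream
git checkout -q master
git merge -q upstream/master
git log --oneline
```

After the merge, the fork's master fast-forwards to include "upstream change".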

Ignoring Ri and RDoc During Gem Install

I personally never use ri (the Ruby Index) or RDoc (Ruby Documentation).

To prevent them from installing during gem install, just add this line to your ~/.gemrc or /etc/gemrc (on newer RubyGems, the single flag --no-document replaces both):

gem: --no-rdoc --no-ri

Deleting MySQL Bin Files

To view your current bin files:

$ mysql -u root -p
mysql> SHOW MASTER LOGS;

To clear all logs but the last one:

mysql> PURGE MASTER LOGS TO 'mysql-bin.000107';

Open my.cnf and comment out the following lines to prevent logging in the future:

# log-bin=mysql-bin

Restart your server and confirm logging is now disabled:

$ mysql.server restart
$ mysql -u root -p
mysql> SHOW MASTER LOGS;

Non-Blocking MySQL Database Export for InnoDB Tables

To quickly dump a large InnoDB database to file without locking it up:

mysqldump --single-transaction --quick -u webuser -h example.com 'dbname' > dbname.sql

This will issue a START TRANSACTION and as long as the following commands are not issued before your export completes, you will have a perfect snapshot:

ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE

MyISAM or MEMORY tables dumped while using this option may still change state.
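Large dumps compress well, so you can gzip on the fly instead of writing raw SQL to disk (same hypothetical host and credentials as above):

```
mysqldump --single-transaction --quick -u webuser -h example.com 'dbname' | gzip > dbname.sql.gz
```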

Update Your Locate Database

The locate command is great for searching the entire filesystem for files:

locate my.cnf

Recently created files and directories might not show up, so update the index. On Mac OS X the updatedb command lives at a non-standard path; find it, then run it:

locate updatedb
/usr/libexec/locate.updatedb

sudo /usr/libexec/locate.updatedb

Shell Modes and Init Files

Modes

There are two main shell modes:

1. Login

When a user logs in with a non-graphical interface or SSH.

2. Interactive

When a user has a prompt and standard in/out are connected to the terminal.

Combinations of Modes

A shell can be initialized with the following mode combinations:

Login + Interactive

You will be forwarded to the user's home directory, with the user's environment.

  • log in to a remote system via SSH
  • new terminal tab, Mac OS X
  • sudo su -

files sourced:

# The systemwide initialization file
/etc/profile

# The personal initialization files, first one found, in order
~/.bash_profile
~/.bash_login
~/.profile

Non-login + Interactive

You will stay in the current directory, but will have the user's environment.

  • new terminal tab, linux
  • start new shell process ($ bash)
  • execute script remotely and request a terminal (ssh user@host -t 'echo $PWD')
  • sudo su

files sourced:

# The individual per-interactive-shell startup file
~/.bashrc

Non-login + Non-Interactive

You will stay in the current directory and keep your current environment.

  • run an executable with #!/usr/bin/env bash shebang
  • run a script ($ bash test.sh)
  • execute script remotely (ssh user@host 'echo $PWD')

files sourced:

# If the BASH_ENV variable is set, its value is expanded and sourced
$BASH_ENV
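You can probe which mode a shell is in: $- contains an "i" for interactive shells, and bash sets the login_shell option for login shells. Run non-interactively (as below, via bash -c), both probes report the non-login, non-interactive case:

```shell
# interactive? ($- holds the shell's option flags; "i" means interactive)
bash -c 'case $- in *i*) echo interactive ;; *) echo non-interactive ;; esac'

# login shell? (bash sets the login_shell shopt only for login shells)
bash -c 'shopt -q login_shell && echo login || echo non-login'
```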


Executing Shell Scripts

Here are some of the basic ways you can execute scripts on the command line.

source or . will read and execute commands from a file in the current shell environment. Any environment variables set within the script remain after it exits.

. test.sh
source test.sh

sh or bash will fork a new shell with the specified interpreter.

sh test.sh
bash test.sh

./ will also fork a new shell, but the file needs to be set as executable, and the interpreter is derived from the shebang line (#!/bin/sh).

chmod +x test.sh
./test.sh
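A quick demonstration of the difference: a variable set by a sourced script survives in the current shell, while one set by a forked script does not. The script here is just a scratch temp file:

```shell
script=$(mktemp)
printf 'MY_VAR=hello\n' > "$script"

# forked: runs in a child shell, so MY_VAR does not persist here
bash "$script"
echo "after bash:   ${MY_VAR:-unset}"

# sourced: runs in the current shell, so MY_VAR persists
. "$script"
echo "after source: ${MY_VAR:-unset}"

rm -f "$script"
```

The first echo prints "unset", the second prints "hello".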

Apache and/or Nginx

Apache

Apache was designed to process web requests in a consistent and reliable manner.

It uses a process-driven architecture and handles one connection per process or per thread.

Apache comes with two classic multi-processing modules (MPMs), which bind to ports, accept requests and spawn processes (Apache 2.4 also adds a third, the Event MPM):

  1. Prefork MPM

    Uses multiple child processes with one thread each. Each process handles one connection at a time.

  2. Worker MPM

    Uses multiple child processes with many threads each. Each thread handles one connection at a time. mod_php is not thread safe, so with the Worker MPM it is recommended to run PHP via FastCGI (e.g. PHP-FPM) instead, so that PHP runs in its own memory space.

Apache does not scale well under high server load, often consuming large amounts of RAM and CPU.

With Apache, it is easy to configure complex setups, it has excellent documentation and abundant module availability.
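To check which MPM your Apache build is using (the control binary may be apachectl, apache2ctl or httpd depending on the distro):

```
apachectl -V | grep -i mpm
```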

Nginx

Nginx was designed to solve the C10K problem and, as a result, is a highly performant HTTP and reverse proxy server.

It uses an event-driven architecture and handles multiple connections in a single event loop.

Nginx is very good at serving static content, which accounts for 80 - 95% of website requests. It scales very well under high server load, with memory use that stays nearly constant as the number of connections grows.

Compared to other, more established web servers, complex site configuration is typically more difficult with Nginx due to its lightweight design. It is also weaker in documentation and module support.

Some notable sites using Nginx include:

  • github
  • wordpress
  • pinterest
  • netflix
  • cloudflare

Both?

Nginx can be used as a reverse proxy in front of Apache, handling all requests for static content. All other requests are forwarded to Apache via proxy_pass. You will need to install mod_rpaf so Apache can pick up the X-Real-IP header provided by Nginx. This makes it seem as though Apache handled the original request.
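A minimal sketch of such a setup, assuming Apache listens on port 8080 on the same machine (the server name and paths are illustrative, not a drop-in config):

```nginx
server {
    listen 80;
    server_name example.com;

    # serve static content directly from disk
    location /static/ {
        root /var/www/example;
    }

    # forward everything else to Apache
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
```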