

Restore a single MySQL table from a mysqldump (gzip)

So you have a large gzipped backup and you only need one table from it. What do you do? I used to import the whole backup to another server first and then copy over the data, but that was very slow. Here's a handy solution I found that extracts just the SQL for the table in question, ready for importing:

zcat my_database_backup.gz | sed -n -e '/DROP TABLE IF EXISTS `my_table`/,/UNLOCK TABLES/p' > my_table.sql

Note that you can also use "gzip -cd" instead of "zcat".

If the resulting file is large, you can gzip it as well.

zcat my_database_backup.gz | sed -n -e '/DROP TABLE IF EXISTS `my_table`/,/UNLOCK TABLES/p' | gzip > my_table.sql.gz

Importing the gzipped SQL file is easy.

zcat my_table.sql.gz | mysql -u USER -p DATABASE
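The same one-liner can be wrapped into a small reusable helper. A minimal sketch, where the function name and the demo's file and table names are made up:

```shell
# extract_table: pull one table's SQL out of a gzipped mysqldump,
# relying on mysqldump's default per-table layout
# (DROP TABLE IF EXISTS ... through UNLOCK TABLES).
extract_table() {
  zcat "$1" | sed -n -e "/DROP TABLE IF EXISTS \`$2\`/,/UNLOCK TABLES/p"
}

# Demonstration against a tiny fake dump containing tables `a` and `b`
printf 'DROP TABLE IF EXISTS `a`;\nCREATE TABLE `a` (x int);\nUNLOCK TABLES;\nDROP TABLE IF EXISTS `b`;\nUNLOCK TABLES;\n' | gzip > /tmp/fake_dump.gz
extract_table /tmp/fake_dump.gz a > /tmp/a.sql   # contains only table `a`
```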

Posted in Bash, Database, Unix.


Samba4 (A global catalog (GC) cannot be contacted)

I've been experimenting with Samba4 as an Active Directory Domain Controller. I got it running and was able to join it with a Windows 7 workstation. I did, however, run into some problems. Here they are with their solutions:

  • Problem: When clicking “Member of” in Active Directory Users and Computers->#domain#->Users->#username# you get the error message “A global catalog (GC) cannot be contacted. A GC is needed to list the object’s group memberships. …”
    Solution: Make sure you have the Active Directory Controller in the list of DNS servers on the client before your other DNS servers/router.
  • Problem: Joining the domain fails.
    Solution: Use the realm part of the domain. If your domain is “MYDOMAIN.LOCAL” then join using “MYDOMAIN”.
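On a Linux client, the DNS ordering part of the first fix can be sanity-checked with a small sketch like this (the sample file and addresses below are made up; on the Windows 7 client itself, inspect the DNS server list with ipconfig /all):

```shell
# Check that the AD DC's address is the FIRST nameserver the client consults.
dc_is_first_dns() {
  first=$(awk '/^nameserver/ { print $2; exit }' "$1")
  [ "$first" = "$2" ]
}

# Demonstration against a sample resolv.conf with the DC listed first
printf 'nameserver 192.168.1.10\nnameserver 8.8.8.8\n' > /tmp/resolv.sample
dc_is_first_dns /tmp/resolv.sample 192.168.1.10 && echo "DC is first"
```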

Posted in Network.



Galera Cluster SST problem with xtrabackup

I ran into a problem when one of my Galera nodes had to do an SST (state snapshot transfer). I really struggled with it because the logs gave no indication of what the problem was. Finally I got lucky and figured it out: I was missing the tmpdir parameter from my.cnf, and while MySQL, or in this case MariaDB, was able to use the system default /tmp, wsrep/xtrabackup was not, because there was a /tmp/percona-version-check file that was readable and writable only by root. Deleting that file solved the problem. Here are the cryptic log messages from both the joiner and the donor nodes:

DONOR:

error.log:

WSREP_SST: [ERROR] innobackupex finished with error: 1. Check /var/lib/mysql//innobackup.backup.log (20131211 20:24:52.065)
WSREP_SST: [ERROR] Cleanup after exit with status:22 (20131211 20:24:52.068)
131211 20:24:52 [ERROR] WSREP: Failed to read from: wsrep_sst_xtrabackup --role 'donor' --address '10.99.0.102:4444/xtrabackup_sst' --auth 'root:*HIDDEN*' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --gtid '3f679640-4d55-11e3-b061-ebb0a2213f88:118489'
131211 20:24:52 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'donor' --address '10.99.0.102:4444/xtrabackup_sst' --auth 'root:*HIDDEN*' --socket '/var/lib/mysql/mysql.sock' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --gtid '3f679640-4d55-11e3-b061-ebb0a2213f88:118489': 22 (Invalid argument)
131211 20:24:52 [Warning] WSREP: Could not find peer: 838fbf37-6291-11e3-8a45-67aba358260a
131211 20:24:52 [Warning] WSREP: 1 (sql44): State transfer to -1 (left the group) failed: -1 (Operation not permitted)

innobackup.backup.log:

….
innobackupex: Executing a version check against the server…
Can’t use an undefined value as an ARRAY reference at /usr//bin/innobackupex line 1048.

JOINER:

error.log:

131211 20:17:35 [Note] WSREP: Requesting state transfer: success, donor: 0
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
WSREP_SST: [ERROR] Error while getting data from donor node: exit codes: 0 2 (20131211 20:17:35.775)
WSREP_SST: [ERROR] Data directory /var/lib/mysql/ needs to be empty for SST: Manual intervention required in that case (20131211 20:17:35.778)
WSREP_SST: [ERROR] Cleanup after exit with status:32 (20131211 20:17:35.781)
131211 20:17:35 [Warning] WSREP: 0 (sql5): State transfer to 1 (sql3) failed: -1 (Operation not permitted)
131211 20:17:35 [ERROR] WSREP: gcs/src/gcs_group.c:gcs_group_handle_join_msg():719: Will never receive state. Need to abort.
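For reference, the fix itself boils down to two steps; the tmpdir path below is a made-up example and must exist and be writable by the mysql user:

```shell
# 1. Remove the version-check file that was readable and writable only by root:
rm -f /tmp/percona-version-check

# 2. Set an explicit tmpdir in my.cnf so wsrep/xtrabackup no longer depends
#    on the system default /tmp, e.g.:
#    [mysqld]
#    tmpdir = /var/lib/mysql/tmp
```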

Posted in Database.



Solution for NFS4 client showing groups as nobody

I ran into a problem where most of my client servers use NFS3, but my new CentOS 6 client servers use NFS4 to connect to the NFS4/3 shares. The problem was that all NFS files showed their group as "nobody" on the client servers. Luckily I found a Serverfault post about it and was able to fix it: http://serverfault.com/questions/364613/centos-6-ldap-nfs-file-ownership-is-stuck-on-nobody. Here's a quick rundown:

Edit the /etc/idmapd.conf file and uncomment the line

#Domain = local.domain.edu

and replace the value with the same one used on the NFS server, usually localdomain. You might want to change it to your actual network domain.

I rebooted after that, but it wasn't enough to make it work since the information is cached. Run the following command to flush the cache.

nfsidmap -c

And that’s it! The groups should now be working.

UPDATE:

Unfortunately I encountered a problem. When doing a chown from the client, the owner was changed to nobody instead of the given user. I was unable to solve this, so I reverted to NFS3 by mounting the shares in /etc/fstab using the full export path and adding vers=3 to the mount options.
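A hypothetical /etc/fstab line for this (server name and paths are made up) could look like:

```
nfsserver:/export/full/path  /mnt/share  nfs  vers=3  0 0
```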

You can check the full export paths from the client with the following command:

showmount -e servername_or_ip

Hopefully I’ll be able to solve it soon.

Posted in Unix.


LDAP authentication with CentOS 6 (SSSD)

I began moving from CentOS 5 to CentOS 6 (6.4, to be exact). I ran into a problem early on when trying to configure user authentication with our LDAP server: I started configuring it like I did on CentOS 5, using PAM and the /etc/pam_ldap.conf file, when the new installation actually uses a new module called SSSD. I then switched to it by putting my settings in /etc/sssd/sssd.conf and making the appropriate changes to /etc/nsswitch.conf. Namely:

passwd: files sss
shadow: files sss
group: files sss

However, I could not get the users' secondary groups to work. I tried everything I could think of, but I just wasn't getting them. It took a clean install and a comparison of configuration files to notice that my PAM configuration (maybe 'authconfig-tui', I'm not sure) had added a line to nsswitch.conf, and the presence of that single line was what was keeping SSSD from working. That single line is:

initgroups: files [SUCCESS=continue] sss

Make sure you don’t have that line in there and it should work. I have memberUid as the group member attribute, so my working SSSD configuration was this:

[domain/default]
ldap_id_use_start_tls = True
cache_credentials = True
ldap_search_base = dc=mydomain,dc=com
krb5_realm = EXAMPLE.COM
krb5_server = kerberos.example.com
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
ldap_uri = ldap://the.ip/
ldap_tls_cacertdir = /etc/openldap/cacerts
ldap_tls_reqcert = never
 
[sssd]
 
services = nss, pam
config_file_version = 2
 
domains = default
[nss]
 
[pam]
 
[sudo]
 
[autofs]
 
[ssh]
 
[pac]
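To double-check for the offending line, here's a quick grep sketch (the sample file below is fabricated for demonstration; point the function at your real /etc/nsswitch.conf):

```shell
# Warn if an nsswitch.conf-style file contains the initgroups line
# that kept SSSD from returning secondary groups.
check_initgroups() {
  if grep -q '^initgroups:' "$1"; then
    echo "remove the initgroups line"
  else
    echo "nsswitch.conf looks ok"
  fi
}

# Demonstration against a fabricated sample
printf 'passwd: files sss\ninitgroups: files [SUCCESS=continue] sss\n' > /tmp/nsswitch.sample
check_initgroups /tmp/nsswitch.sample   # prints "remove the initgroups line"
```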

 

Posted in Unix.



7zip missing --move argument script

I needed to archive historical files in order to free up much needed disk space. This included a lot of Excel files which compress very well and can save a lot of space.

I was going to use the standard zip utility provided by my CentOS installation, but my colleague suggested I try 7zip instead. And sure enough, the compression ratio was way better, in some cases even 2x compared to zip.

My goal was to compress the files in a specified directory individually, in their original location, retaining the original directory structure, and then remove the original files. As far as the removal is concerned, the zip utility has a handy --move argument, which "moves the file into the archive" and thus removes the original. To my surprise, 7zip does not have that functionality. The latest alpha desktop version seems to have it, but not the command line version. This meant I had to implement it myself in a bash script. While I was at it, I added a little extra logic and success/error logging.

Here’s the script:

Features:

  • Confirmation with the absolute path for security
  • Arguments for controlling runtime and verbosity

Todo/Problems:

  • Silent mode (no output) not working as far as the 7zip command is concerned

#!/bin/bash
#
# SCRIPT FOR COMPRESSING FILES AND REMOVING ORIGINALS RECURSIVELY WITH 7ZIP
# Written by Kurt Martinsen on 2013/05/29
#
# At the time of writing, there was no --move argument in 7zip, which would remove the original file after successful compressing.
# Most of this can be replaced when such an option is available.
#
# This script does the following: 1. compresses files recursively 2. tests the archives 3. removes the original files.
# Retains file permissions and ownership for the archive, but depending on 7zip, might not do the same inside the archive.
# DOES NOT COMPRESS AND REMOVE COMPLETE DIRECTORIES, just the files inside them one by one.
# DOES NOT COMPRESS ARCHIVE FILES! You can comment out "if [[ $file =~ ^.*\.+(7z|zip|bz2|gz)$ ]]; then" and it will.
#
 
# This is called on control-c so we can break from the while loop that is fed with the find command
function breakProgram {
  echo "User issued break signal. Exiting with status 130..."
  exit 130
}
# Catch control-c and call the break function
trap breakProgram SIGINT SIGTERM
 
# USAGE
function usage {
cat << EOF
  usage: $0 [OPTIONS] directory (options must precede directory)
 
  SCRIPT FOR COMPRESSING FILES AND REMOVING ORIGINALS RECURSIVELY WITH 7ZIP
 
  OPTIONS:
     -h      Show this message
     -p      No prompts
     -s      Silent mode, output only totals of successes and failures
     -d      Don't remove files after compression
 
EOF
}
 
# START ARGUMENT CHECKS
 
# Argument variables
VERBOSE=true
PROMPT=true
DELETE_ON_SUCCESS=true
 
while getopts "hpsd" OPTION
do
  case $OPTION in
    h)
      usage
      exit 1
      ;;
    p)
      PROMPT=false
      ;;
    s)
      VERBOSE=false
      ;;
    d)
      DELETE_ON_SUCCESS=false
      ;;
    ?)
      usage
      exit
      ;;
  esac
done
 
# This will allow us to get the directory argument
shift $(( OPTIND-1 ))
 
# END ARGUMENT CHECKS
 
# START VARIABLES
 
# Current working directory
WORKING_DIR="`pwd`"
# User supplied directory
TARGET_PARAM="$1"
# Check for absolute path (this is used in the confirmation dialog)
if [[ $TARGET_PARAM == /* ]]; then
  TARGET_DIRECTORY=$TARGET_PARAM
else
  TARGET_DIRECTORY="$WORKING_DIR/"$TARGET_PARAM
fi
 
# Additional safety for deleting files
PROMPT_ON_DELETE=false
 
# Counters for success and failure + failure log
SUCCESS_COUNTER=0
FAILURE_COUNTER=0
FAILURE_LOG=""
 
# 7zip program arguments
PROGRAM_7ZIP="7z a"
#PROGRAM_ARGUMENTS="-xr!*.7z -xr!*.bz2 -xr!*.gz -xr!*.zip"
PROGRAM_7ZIP_TEST="7z t"
if [ $PROMPT == false ]; then
  PROGRAM_ARGUMENTS_EXTRA="-y"
fi
 
# END VARIABLES
 
# START FUNCTIONS
 
# Actual archiving and removal function
function sevenZipIt {
  if [ $VERBOSE == false ]; then
    $PROGRAM_7ZIP "$1.7z" "$1" $PROGRAM_ARGUMENTS_EXTRA > /dev/null 2>&1
  else
    $PROGRAM_7ZIP "$1.7z" "$1" $PROGRAM_ARGUMENTS_EXTRA
  fi
  STATUSCODE=$?
  if [ $STATUSCODE -eq 0 ]; then
    # Retain file attributes
    retainFileAttributes "$1" "$1.7z"
    # Test archive
    $PROGRAM_7ZIP_TEST "$1.7z" $PROGRAM_ARGUMENTS_EXTRA
    STATUSCODE=$?
    if [ $STATUSCODE -eq 0 ]; then
      if [ $DELETE_ON_SUCCESS == true ]; then
        # Remove the original file. Skip prompt so we don't have to answer every time
        if [ $PROMPT_ON_DELETE == false ]; then
          rm -f "$1"
        else
          rm -i "$1"
        fi
        STATUSCODE=$?
        if [ $VERBOSE == true ]; then
          if [ $STATUSCODE -eq 0 ]; then
            echo "Deleted original file after successful testing of achive: $1"
          else
            echo "Did not delete original file, either user answered no or there was a problem: $1"
          fi
        fi
      fi
      SUCCESS_COUNTER=$(( $SUCCESS_COUNTER + 1 ))
    else
      if [ $VERBOSE == true ]; then
        echo "Testing archive failed, did not delete original file: $1"
      fi
      FAILURE_COUNTER=$(( $FAILURE_COUNTER + 1 ))
      FAILURE_LOG="$FAILURE_LOG\n$1"
    fi
  else
    if [ $VERBOSE == true ]; then
      echo "Archiving failed with status code $statuscode for file: $1"
    fi
    FAILURE_COUNTER=$(( $FAILURE_COUNTER + 1 ))
    FAILURE_LOG="$FAILURE_LOG\n$1"
  fi
}
 
# Retains permissions and ownership. First argument is the original file, second the new file
function retainFileAttributes {
  chmodparam=$( stat --format=%a "$1" )
  chownparam=$( stat --format=%u "$1" )":"$( stat --format=%g "$1" )
  chmod $chmodparam "$2"
  chown $chownparam "$2"
}
 
# END FUNCTIONS
 
# START PROGRAM LOGIC
 
if [ ! -d "$TARGET_DIRECTORY" ]; then
  echo "ERROR Target directory $TARGET_DIRECTORY does not exist! Exiting..."
  exit 1
fi
 
# Ask user for confirmation
if [ $PROMPT == true ]; then
  read -p "Are you sure you want to archive and remove original files from $TARGET_DIRECTORY? " -n 1 -r
  echo
else
  REPLY="y"
fi
 
if [[ $REPLY =~ ^[Yy]$ ]]; then
  # Extra security for delete
  if [ $PROMPT == true ]; then
    read -p "Do you want to be asked before each file removal? " -n 1 -r
    echo
  else
    REPLY="n"
  fi
  if [[ $REPLY =~ ^[Yy]$ ]]; then
    PROMPT_ON_DELETE=true
  fi
 
  # 
  # Go through all the files from the find commands output
  #
  while read file; do
    # Check for undesirable file types
    if [[ $file =~ ^.*\.+(7z|zip|bz2|gz)$ ]]; then
      if [ $VERBOSE == true ]; then
        echo "Skipping archive..."
      fi
    else
      # Call the archiving function
      sevenZipIt "$file"
    fi
  done <<< "`find -P \"$TARGET_DIRECTORY\" -type f -print`"
 
  echo
  echo "Successfully archived and removed $SUCCESS_COUNTER files."
  echo "Failed to archive and remove $FAILURE_COUNTER files."
 
  # Ask to see failure log
  if [ $FAILURE_COUNTER -gt 0 ]; then 
    if [ $VERBOSE == true ]; then
      if [ $PROMPT == true ]; then
        read -p "Do you want to view the log for failed files?" -n 1 -r
        echo
      else
        REPLY="y"
      fi
      if [[ $REPLY =~ ^[Yy]$ ]]; then
        echo
        echo -e "$FAILURE_LOG"
      fi
    fi
  fi
fi
 
# END PROGRAM LOGIC
 
exit 0

There’s a lot more that can be added, but it’s working for me as it is.

Posted in Unix.



Downloading RTMP streams

Recently a friend of mine, who's a singer, asked me if I could help her get some videos of her performances off a website before they were removed. I discovered that the site didn't have .flv URLs in the source code, but instead used the RTMP streaming protocol. I did some googling and found a blog post that mentioned a command line utility called rtmpdump, which can be used to download the stream: http://www.meydad.com/2012/07/14/how-to-download-a-rtmp-stream-to-a-local-flv-file-using-rtmpdump-for-mac/. That got me on the right track, but I still didn't have the URL. I then found a very good video on YouTube, which got me even further: http://www.youtube.com/watch?v=8PuUnQCS7DQ. I was almost successful after combining these tutorials, but the video ended up skipping and was broken. After a long Google session I noticed that one project that uses rtmpdump is ffmpeg. After some exploration I found the right parameters and successfully downloaded the videos. The process goes like this:

  1. Start Wireshark and start capturing network traffic
  2. Start playing the video on website
  3. Stop network capture
  4. Analyze the capture data and formulate the RTMP URL (by combining the RTMP location and the video filename)
  5. Use ffmpeg to download the stream from the URL

Here's how you can do it on a Mac running Mountain Lion.

  • In order to install ffmpeg, you can install a package manager called Homebrew by issuing the following:
    ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"
  • Then issue the following commands:
    brew doctor
    brew install ffmpeg
  • In order to use Wireshark you have to have an X11 Window System installed. You can download and install the open-source XQuartz for Mac.
  • Download and install Wireshark from http://www.wireshark.org/.
  • Now you should have everything installed. Before you can open Wireshark, you have to have XQuartz running, so launch it now (it takes a while to open).
  • After the X11 terminal window has opened, launch Wireshark.
  • You can find good instructions for the Wireshark part in the YouTube video I mentioned earlier, but here's a recap:
    • Select network interface (en0)
    • Click “Start”
    • Write “rtmpt” in the filter box
    • After the correct video has been playing for a few seconds stop the capture with Capture->Stop. You can now stop playing the video.
    • Sort by the Source column, then find and select the entry where the source is your IP and the info field says "play(..". (There might be several, if the video had ads for example)
    • Right click and select “Follow TCP stream”
    • Click "Find" and write "rtmp". This will tell you the root URL (tcUrl) of the stream (in my case it continued onto the next row, until the last /-character).
    • Then click "Find" again and search for "mp4", which will give you the filename to append to the root URL. You can also look for it manually, since it is close to the root URL.
  • Now you can download the clip with the following command
    ffmpeg -i rtmp://rootUrl/filename.mp4 -c copy dump.mp4

And that’s it. Seems kind of difficult, but once you’ve done it once, it’s fast and easy.

Posted in Unix.



Getting to know Groovy

The Groovy JVM scripting language has been around for many years now, but I never really had much interest in testing it. I finally read a bit more about it and watched a presentation. I wanted to try it out myself by parsing a table on an HTML page and printing the output. The amount of code required was very low, and the syntax was somewhat familiar from Java. I used the Groovy/Grails Tool Suite as my IDE, since it had better code completion than MyEclipse 10.7.1.

Here’s the “final” code for the test

@Grab(group='org.ccil.cowan.tagsoup', module='tagsoup', version='1.2' )
def tagsoupParser = new org.ccil.cowan.tagsoup.Parser()
def slurper = new XmlSlurper(tagsoupParser)
def htmlParser = slurper.parse("data.html")
def myTable = htmlParser.'**'.find{ it.@class == 'my_div_class'}.'**'.find{ it.@class == 'my_table_class' }
myTable.tr.eachWithIndex{ row, index -> 
    println "${row.td[0]} ${row.td[3]} ${row.td[2]}"
}

On lines 1-3 we grab the package needed to parse HTML that may have missing end tags etc., and we create a parser. On line 4 we load and parse the HTML file. On line 5 we extract the table we are looking for by searching for an element with the class "my_div_class" and, inside it, the table with the class "my_table_class". On line 6 we loop over all the rows in the table, giving each row to a closure which, on line 7, prints the first, fourth and third cells in that order. And that's it!

Here’s a sample of the same code in Java

import java.io.File;
import java.io.IOException;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;
 
public class HtmlParser {
    public static void main(String[] args) {
        File htmlFile = new File("src/main/java/ama/test/mavenstuff/data.html");
        try {
            Document doc = Jsoup.parse(htmlFile, null);
            Element tableElement = doc.getElementsByClass("module").get(0).getElementsByClass("table_stockexchange").get(0);
            Elements tableRows = tableElement.select("tr");
            for (int i = 0; i < tableRows.size(); i++) {
                System.out.println(tableRows.get(i).select("td").get(0).text()
                    + " " + tableRows.get(i).select("td").get(3).text()
                    + " " + tableRows.get(i).select("td").get(2).text()
                );
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Posted in Java.



Using the facebook API with Java

I had previously made a module for a PHP website that cross-posts all posts to a Facebook Page, so I knew a little about the Facebook API and the whole ecosystem. This time I wanted to use Java, since it's currently my primary programming language. I didn't want to reinvent the wheel, so I started looking around for a Java API. Facebook doesn't maintain such an API anymore, but I found two that could work.

http://code.google.com/p/facebook-java-api/ was the first one I tried, but it seemed to employ the old API key/secret key/session key approach, which Facebook seems to have replaced with a token-based approach. This prompted me to test the other option, http://restfb.com/. It seemed like a clean, OO-like solution, so I started to experiment with it.

My idea was to query Facebook to see which of my friends were online. To accomplish that I registered a pseudo application on Facebook's developer site https://developers.facebook.com/apps and used the Graph API Explorer https://developers.facebook.com/tools/explorer/ to get a valid access token for my app with the required access rights. The required rights were user_online_presence, friends_online_presence and offline_access. (offline_access may be deprecated in the near future, and thus the access token may have to be refreshed.)

I decided to make a simple command line runnable JAR instead of a web application, but at the same time write code that could be reused in a web application later on. For the arguments I chose Apache Commons CLI, which creates usage printouts from arguments automatically. For logging I chose Log4J, but incorporated SLF4J to get logging from RestFB into my own logs.

Basically, all that was needed to get the data from Facebook was to create an instance of DefaultFacebookClient and query using Facebook's own FQL, which is similar to SQL. Since RestFB is very OO, I needed to create an object to hold the user's uid, name and online presence. I started by querying all the rows, even the offline ones, but soon changed it to query only the active and idle statuses. The queries involved were:

Friends:

SELECT uid, name, online_presence FROM user WHERE online_presence IN ('active', 'idle') AND uid IN (SELECT uid2 FROM friend WHERE uid1 = me())

Myself:

SELECT uid, name, online_presence FROM user WHERE online_presence IN ('active', 'idle') AND uid = me()

I did create cool stuff around that, but this is all you need to know. 🙂

Posted in Java.



Multiple HTTPS domains and sub-domains on a single server using a wildcard certificate

Since an HTTPS server doesn't know the requested domain name before it has to send a certificate, it's common to use a dedicated IP address for each virtual host when serving multiple domains. There's nothing wrong with that, but at least some ISPs require you to register a whole address space once you go past 5 registered IPs.

I had a scenario with three different domains and a virtual host for each of them. There was a requirement to add new virtual hosts that were sub-domains of one of the existing domains, and these also had to be secured. It seems like it couldn't be done, but there is a solution.

What you need is a wildcard certificate, specified in the first virtual host entry for that IP and domain name. The sub-domain virtual hosts must come next, before the other domains. In those sub-domain entries you can omit the SSL-specific configuration, because it is inherited from the "main" virtual host. After those you can configure the other domains, which use their own IPs.

Here’s the relevant Apache configuration:

#
# *.MAINDOMAIN.COM
#
<VirtualHost 192.168.0.1:443>
ServerName www.maindomain.com
# General setup for the virtual host at this IP
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile "/etc/ssl/certs/certificate.crt"
SSLCertificateKeyFile "/etc/ssl/private/certificate.key"
SSLCertificateChainFile "/etc/ssl/certs/CA_bundle.pem"
<Location />
Order allow,deny
Allow from all
</Location>
ErrorLog "/var/log/apache2/error_log"
TransferLog "/var/log/apache2/access_log"
...Configure here
</VirtualHost>
# SUB2.MAINDOMAIN.COM
<VirtualHost 192.168.0.1:443>
ServerName sub2.maindomain.com
ServerAlias www.sub2.maindomain.com
<Location />
Order allow,deny
Allow from all
</Location>
ErrorLog "/var/log/apache2/error_log"
TransferLog "/var/log/apache2/access_log"
...Configure here
</VirtualHost>
# SUB3.MAINDOMAIN.COM
<VirtualHost 192.168.0.1:443>
ServerName sub3.maindomain.com
ServerAlias www.sub3.maindomain.com
<Location />
Order allow,deny
Allow from all
</Location>
ErrorLog "/var/log/apache2/error_log"
TransferLog "/var/log/apache2/access_log"
...Configure here
</VirtualHost>
#
# END *.MAINDOMAIN.COM
#
# 2NDDOMAIN.COM
<VirtualHost 192.168.0.2:443>
ServerName www.2nddomain.com
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile "/etc/ssl/certs/certificate3.crt"
SSLCertificateKeyFile "/etc/ssl/private/certificate3.key"
<Location />
Order allow,deny
Allow from all
</Location>
ErrorLog "/var/log/apache2/error_log"
TransferLog "/var/log/apache2/access_log"
...Configure here
</VirtualHost>
# 3RDDOMAIN.COM
<VirtualHost 192.168.0.3:443>
ServerName www.3rddomain.com
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile "/etc/ssl/certs/certificate3.crt"
SSLCertificateKeyFile "/etc/ssl/private/certificate3.key"
<Location />
Order allow,deny
Allow from all
</Location>
ErrorLog "/var/log/apache2/error_log"
TransferLog "/var/log/apache2/access_log"
...Configure here
</VirtualHost>

Posted in Apache.
