Create a keytab file on Windows and use Java and Kerberos to connect to SQL SERVER

I currently work in a mixed environment containing both Linux and Windows computers, which can make authentication challenging at times. Below is an example Java program that lets you connect to SQL SERVER with Kerberos from a Windows or Linux client. The process involves creating a keytab file and a Java login configuration file. The keytab file stores your username and password in an encrypted format and can be used from Java to authenticate to Windows resources such as SQL SERVER and file servers.

STEP 1. Create a keytab file


cd "C:\Program Files\Java\jdk1.8.0_31\bin"
ktab.exe -a [email protected] P@ssword -k user01.keytab
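You can confirm the entry was written by listing the keytab with the same tool (the -l switch of ktab):

```shell
ktab.exe -l -k user01.keytab
```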

STEP 2. Reference the keytab in your Java login configuration file.

java-login {
 com.sun.security.auth.module.Krb5LoginModule required
 useTicketCache=false
 doNotPrompt=true
 useKeyTab=true
 debug=true
 keyTab="c:/eclipse/user01.keytab"
 principal="[email protected]"
 };

STEP 3. The Java code references the login configuration file and then makes the SQL SERVER connection as the subject [email protected].

import java.security.PrivilegedExceptionAction;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.security.auth.Subject;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;


public class main {

        public static void main(String[] args) {

                System.setProperty("java.security.auth.login.config", "c:/eclipse/java-login.conf");

                Subject subject = null;

                try {
                        LoginContext loginContext = new LoginContext("java-login");
                        loginContext.login();
                        subject = loginContext.getSubject();
                } catch (LoginException e) {
                        e.printStackTrace();
                }

                // This application passes the javax.security.auth.Subject
                // to the driver by executing the driver code as the subject.
                try {
                        Connection con = Subject.doAs(subject, new PrivilegedExceptionAction<Connection>() {

                                public Connection run() {
                                        Connection con = null;
                                        try {
                                                Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                                                String url = "jdbc:sqlserver://SQLSERVER.lab.net;instanceName=sqlinstance01;database=Accounting;integratedSecurity=true;authenticationScheme=JavaKerberos";
                                                con = java.sql.DriverManager.getConnection(url);
                                        } catch (Exception except) {
                                                // log the connection error
                                                except.printStackTrace();
                                                return null;
                                        }
                                        return con;
                                }
                        });

                        String sql = "select * from dbo.table01";
                        Statement stmt = con.createStatement();
                        ResultSet rs = stmt.executeQuery(sql);

                        while (rs.next()) {
                                System.out.println(rs.getString(1));
                        }
                } catch (Exception e) {
                        e.printStackTrace();
                }

        } // end of method main

} // end of class main

SolarWinds SAM Appinsight for SQL SERVER causing deadlocks

I recently found a bug in the way SolarWinds monitors our databases. I am running SolarWinds Server & Application Monitor 11.1 with AppInsight for SQL SERVER. Ever since I turned on the AppInsight feature I started seeing deadlocks occurring in my SQL SERVER environment. Upon further investigation I noticed the AppInsight for SQL SERVER monitoring queries were reading system tables in all my databases at the read committed isolation level.

I sent the email below to SolarWinds support and suggested that the developers change the query to read uncommitted. Several days later a SolarWinds application engineer replied with a script for me to run. The script updated the monitoring query to use read uncommitted, and I saw an immediate drop in the number of deadlock alerts coming from my SQL SERVER. The moral of the story is to test monitoring thoroughly before using it in production; in this case the monitoring server was at times causing transactions to be terminated unexpectedly.


-----Original Message-----
From: [email protected]
Sent: Monday, January 12, 2015 12:31 pm GMT (GMT+00:00)
Subject: appinsight for sql is causing deadlocks

The issue I am seeing is that solarwinds is locking system tables when its
app insight sql monitor runs. Here is one example where app insight
deadlocked on tempdb.sys.sysidxstats. Might I suggest that solarwinds
change the software so it does a “dirty read” or in other words read
uncommitted. This would prevent the deadlocks from happening.
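The suggestion in the email corresponds to T-SQL along these lines (a session-level isolation setting, or a per-query NOLOCK table hint); dirty reads may return uncommitted data, which is usually an acceptable trade-off for monitoring queries:

```sql
-- session-level: subsequent reads in this session are dirty reads
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT name, object_id FROM sys.tables;

-- per-query alternative: a NOLOCK table hint
SELECT name, object_id FROM sys.tables WITH (NOLOCK);
```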

Here is an excerpt from the deadlock XML file. The SAM AppInsight monitor was running at isolation level read committed.

<process id="process7481048" taskpriority="0" logused="0" waitresource="KEY: 2:281474980249600 (45001d8e01fc)" waittime="1219" ownerId="825842724" transactionname="SELECT" lasttranstarted="2015-01-11T13:36:28.200" XDES="0x80039960" lockMode="S" schedulerid="6" kpid="2680" status="suspended" spid="143" sbid="0" ecid="0" priority="0" trancount="0" lastbatchstarted="2015-01-11T13:36:28.133" lastbatchcompleted="2015-01-11T13:36:28.090" clientapp=".Net SqlClient Data Provider" hostname="SW01" hostpid="16352" loginname="DOMAIN\sa-monitor" isolationlevel="read committed (2)" xactid="825842724" currentdb="2" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
    <executionStack>
     <frame procname="adhoc" line="1" stmtstart="26" sqlhandle="0x02000000b680b70233f6dc8a15fa42960c50200f89756a25">
SELECT TOP 10
	t.NAME AS TableName,
	t.object_id,
	mainIndex.data_space_id,
	p.rows AS RowCounts,
	SUM(a.total_pages) * 8 AS TotalSpaceKB,
	SUM(CASE i.[type]
	WHEN 2 THEN 8 * a.used_pages
	ELSE 0
	END) AS IndexSpaceKB
FROM [sys].[tables] t
INNER JOIN sys.indexes mainIndex ON (t.object_id = mainIndex.object_id AND mainIndex.[type] IN (0,1))
INNER JOIN sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN [sys].[partitions] p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN [sys].[allocation_units] a ON p.partition_id = a.container_id
WHERE t.NAME NOT LIKE 'dt%' AND t.is_ms_shipped = 0 AND i.OBJECT_ID &gt; 255 GROUP BY t.Name, t.object_id, mainIndex.data_space_id, p.Rows
ORDER BY TotalSpaceKB DESC     </frame>
     <frame procname="adhoc" line="21" stmtstart="1748" stmtend="1786" sqlhandle="0x02000000db583517807077e95c2490bb681fca4f9e78724f">
EXECUTE (@sqlQuery)     </frame>
     <frame procname="unknown" line="1" sqlhandle="0x000000000000000000000000000000000000000000000000">
unknown     </frame>
    </executionStack>
    <inputbuf>

<keylock hobtid="281474980249600" dbid="2" objectname="tempdb.sys.sysidxstats" indexname="clst" id="lock30e2f1d00" mode="X" associatedObjectId="281474980249600">
    <owner-list>
     <owner id="process39d30c088" mode="X"/>
    </owner-list>
    <waiter-list>
     <waiter id="process7481048" mode="S" requestType="wait"/>
    </waiter-list>
   </keylock>

SolarWinds support provided the following SQL to change the isolation level to READ UNCOMMITTED. Since applying the new SQL in SolarWinds I have not seen any deadlocks.

-- FB395905 AppInsight for SQL is causing SQL Deadlocks
-- specifying isolation level for polling
UPDATE cts SET Value = 'SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

DECLARE @TimeStamp datetime
SET @TimeStamp = GETUTCDATE()
SELECT
	name as dbName,
	d.database_id,
	[compatibility_level],
	collation_name,
	[state],
	recovery_model,
	DATEADD(second, DATEDIFF(second, GETDATE(), GETUTCDATE()), bk.last_backup) as last_backup,
	avg_read_latency,
	avg_write_latency,
	avg_bytes_per_read,
	avg_bytes_per_write,
	num_of_reads,
	num_of_writes,
	@TimeStamp as TimeStamp
FROM sys.databases d
LEFT JOIN
(
SELECT bs.database_name, MAX(bs.backup_finish_date) last_backup
FROM msdb.dbo.backupset bs
WHERE server_name = SERVERPROPERTY(''ServerName'') GROUP BY bs.database_name
) AS bk ON d.name = bk.database_name
LEFT JOIN
(
SELECT  database_id,
	SUM(io_stall_read_ms / NULLIF(num_of_reads, 0)) AS avg_read_latency,
	SUM(io_stall_write_ms / NULLIF(num_of_writes, 0)) AS avg_write_latency,
	SUM(num_of_bytes_read / NULLIF(num_of_reads, 0)) AS avg_bytes_per_read,
	SUM(num_of_bytes_written / NULLIF(num_of_writes, 0)) AS avg_bytes_per_write,
	SUM(num_of_reads) AS num_of_reads,
	SUM(num_of_writes) AS num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
GROUP BY database_id
)AS [io] ON d.database_id = [io].database_id
WHERE name NOT IN (${UnmanagedDatabases})'
FROM APM_ComponentTemplateSetting cts
INNER JOIN APM_ComponentTemplate ct ON cts.ComponentTemplateID = ct.ID
WHERE ct.UniqueId = '15868507-6248-4DE6-9A83-561535EBC058' AND cts.[Key] = 'SqlQuery'

-- Enforce job rescheduling for ABSA applications
UPDATE a SET LastModified = GETUTCDATE()
FROM APM_Application a INNER JOIN APM_ApplicationTemplate at ON a.TemplateID = at.ID
WHERE at.CustomApplicationType = 'ABSA'

SQL Server deadlock logging with email alerts

This configuration was set up on Windows 2012 and SQL SERVER 2012; it was also tested on SQL SERVER 2008.

STEP 1.
Create a table named DeadlockEvents to hold the SQL SERVER deadlocks. One column stores the time at which the deadlock occurred and the other the XML deadlock graph.


/** 
Create Table to hold deadlocks
**/

USE DBA ;
GO

IF OBJECT_ID('DeadlockEvents', 'U') IS NOT NULL
BEGIN
    DROP TABLE DeadlockEvents ;
END ;
GO

CREATE TABLE DeadlockEvents
    (AlertTime DATETIME, DeadlockGraph XML) ;
GO
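Once deadlocks are being captured, the stored graphs can also be reviewed directly from the table:

```sql
USE DBA;
SELECT AlertTime, DeadlockGraph
FROM dbo.DeadlockEvents
ORDER BY AlertTime DESC;
```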

STEP 2.
Add a SQL Agent job to trigger an insert of a deadlock record when a deadlock event occurs.

The following fields will need to be populated with your environment-specific information.
@owner_login_name=N'DOMAIN\sa-job-owner'


/** Add JOB**/

USE [msdb]
GO

/****** Object:  Job [DBM2_Capture_Deadlock_Graph]    Script Date: 11/13/2014 13:54:07 ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]]    Script Date: 11/13/2014 13:54:07 ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback

END

DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'DBM2_Capture_Deadlock_Graph', 
		@enabled=1, 
		@notify_level_eventlog=2, 
		@notify_level_email=0, 
		@notify_level_netsend=0, 
		@notify_level_page=0, 
		@delete_level=0, 
		@description=N'Job for responding to DEADLOCK_GRAPH events', 
		@category_name=N'[Uncategorized (Local)]', 
		@owner_login_name=N'DOMAIN\sa-job-owner', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Insert graph into LogEvents]    Script Date: 11/13/2014 13:54:08 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Insert graph into LogEvents', 
		@step_id=1, 
		@cmdexec_success_code=0, 
		@on_success_action=3, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'INSERT INTO DeadlockEvents
                (AlertTime, DeadlockGraph)
                VALUES (getdate(), N''$(ESCAPE_SQUOTE(WMI(TextData)))'')', 
		@database_name=N'DBA', 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [send email]    Script Date: 11/13/2014 13:54:08 ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'send email', 
		@step_id=2, 
		@cmdexec_success_code=0, 
		@on_success_action=1, 
		@on_success_step_id=0, 
		@on_fail_action=2, 
		@on_fail_step_id=0, 
		@retry_attempts=0, 
		@retry_interval=0, 
		@os_run_priority=0, @subsystem=N'TSQL', 
		@command=N'exec sp_deadlock_alert', 
		@database_name=N'DBA', 
		@flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:

GO

STEP 3.
This stored procedure is run when the SQL Agent job is triggered. It sends a copy of the deadlock XML to the email address specified as the recipient.


/** Add stored proc **/

use DBA;
GO
create procedure sp_deadlock_alert
AS

DECLARE @profile varchar(30)
set @profile = (select top 1 name from msdb.dbo.sysmail_profile)

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = @profile,
    @recipients = '[email protected]',
    @body = 'DeadLock Graph',
    @subject = 'A deadlock has occurred, please review the provided attachment...',
    @body_format = 'TEXT',
    @query_attachment_filename = 'Deadlockgraph.xml',
    @attach_query_result_as_file = 1,
    @query_no_truncate = 1,
     --@query_result_width = 512,
     --@query_result_no_padding = 1,
    @query = 'SELECT TOP 1 [DeadlockGraph]
  FROM [DBA].[dbo].[DeadlockEvents]
  order by alerttime desc
    for xml PATH(''ROW''), root(''ROOT''), TYPE';
    

STEP 4.
Add an alert to fire the SQL Agent job when a deadlock event occurs.
The job named DBM2_Capture_Deadlock_Graph will fire whenever a deadlock event occurs.
For this to work you need token replacement enabled on the SQL Agent:
right-click SQL Server Agent in Object Explorer, select Properties, and on the Alert System page select "Replace tokens for all job responses to alerts".

/** Add Alert **/
USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Respond to DEADLOCK_GRAPH', 
@wmi_namespace=N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER', 
    @wmi_query=N'SELECT * FROM DEADLOCK_GRAPH', 
    @job_name='DBM2_Capture_Deadlock_Graph' ;
GO

STEP 5.
Testing the deadlock alerting

Open an SSMS query window and execute the following SQL.

use dba;
BEGIN TRAN
update table_a set ID=ID where ID = 100;

update table_b set ID=ID where ID =100;

Open a second SSMS query window and execute the following SQL.
This should result in a deadlock, which inserts a record into the DeadlockEvents table and sends an email alerting you to the deadlock.

use dba;
BEGIN TRAN
update table_b set ID=ID where ID =100;

update table_a set ID=ID where ID = 100;
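SQL SERVER will pick one of the two sessions as the deadlock victim and roll it back automatically (error 1205). The surviving session still holds an open transaction, so release its locks once you are done testing:

```sql
ROLLBACK TRAN;
```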

Connecting to MSSQL SERVER using the Microsoft JDBC driver and Kerberos

I did not find a lot of documentation for connecting to SQL SERVER using Kerberos authentication and Java, so I decided to write up this little blog post.

The key piece of information is using authenticationScheme=JavaKerberos in the connection string.
This works on both Windows and Linux operating systems as long as you have a Kerberos ticket. You can verify that you have a Kerberos ticket by typing klist on the command line.
You should see something like the following for your user.

[user1@vm01 ~]$ klist
Ticket cache: FILE:/tmp/krb5cc_16777216_kbQnZ2
Default principal: [email protected]
 
Valid starting     Expires            Service principal
10/22/14 07:23:58  10/22/14 17:23:58  krbtgt/[email protected]
    renew until 10/29/14 07:23:58

Example program:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class main {
	public static void main(String[] args) {
		try {
			Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
		} catch (ClassNotFoundException e) {
			e.printStackTrace();
		}
		
		String connectionUrl = "jdbc:sqlserver://SQLSERVER01;instanceName=SQLINSTANCE01;database=Inventory;integratedSecurity=true;authenticationScheme=JavaKerberos";
		
		try {
			Connection con = DriverManager.getConnection(connectionUrl);
			System.out.println("connected...");
			
			String SQL = "select * from dbo.table01";
			Statement stmt = con.createStatement();
			ResultSet rs = stmt.executeQuery(SQL);
			

			while (rs.next())
			{
				System.out.println(rs.getString(1));
			}
			
		} catch (SQLException e) {
			e.printStackTrace();
		}
	}
}

Rebuild MySQL Master Master Replication after ERROR: Got fatal error 1236 from master when reading data from binary log

Rebuild MySQL master-master replication after receiving the error below. This procedure can also be used to set up MySQL master/master replication.

Possible error in the MySQL error log:

Got fatal error 1236 from master when reading data from binary log: 
'Found old binary log without GTIDs while looking for the oldest 
binary log that contains any GTID that is not in the given 
gtid set', Error_code: 1236

This post covers the following configuration: a pair of MySQL servers running CentOS 6.5 and MySQL 5.6.
The MySQL servers are slaves of one another (multi-master). The SQL clients write to a virtual IP which is configured to float between the primary and backup master in the event of a failover.
Reference Architecture:
MYSQLSERVER01 – Primary Master
MYSQLSERVER02 – Backup Master
Keepalived – Virtual IP

STEP 1: ON MYSQLSERVER02
Log into MySQL Workbench and execute the following commands

STOP SLAVE;
RESET SLAVE;
RESET MASTER;

STEP 2: ON MYSQLSERVER01
Log into MySQL Workbench and execute the following commands

STOP SLAVE;
RESET SLAVE;
RESET MASTER;

STEP 3: ON MYSQLSERVER01 take a backup with mysqldump from the master:
SSH into the server and execute the following

mysqldump --all-databases --single-transaction --triggers --routines --events --user=root -p > /tmp/dump.sql

STEP 4: ON MYSQLSERVER01 transfer the mysqldump backup file from MYSQLSERVER01 to MYSQLSERVER02
SSH into the server and execute the following

scp /tmp/dump.sql root@MYSQLSERVER02:/tmp/dump.sql

STEP 5: ON MYSQLSERVER02 load the mysqldump file
SSH into the server and execute the following

mysql -u root -p < /tmp/dump.sql

STEP 6: ON MYSQLSERVER02

CHANGE MASTER TO MASTER_HOST='MYSQLSERVER01.lab.net', MASTER_USER='repl', MASTER_PASSWORD='replPassword', MASTER_AUTO_POSITION = 1;

STEP 7. ON MYSQLSERVER02
Verify that Slave_IO_Running and Slave_SQL_Running both have a status of Yes

SHOW SLAVE STATUS;
start slave;
SHOW SLAVE STATUS;

STEP 8. ON MYSQLSERVER01
Log into MySQL Workbench and execute the following commands

CHANGE MASTER TO MASTER_HOST='MYSQLSERVER02.lab.net', MASTER_USER='repl', MASTER_PASSWORD='replPassword', MASTER_AUTO_POSITION = 1;

STEP 9. ON MYSQLSERVER01 start the slave and check the status of the slave operations
Log into MySQL Workbench and execute the following commands
Verify that Slave_IO_Running and Slave_SQL_Running both have a status of Yes

SHOW SLAVE STATUS;
start slave;
SHOW SLAVE STATUS;

STEP 10. Additional testing can be done to confirm that the bidirectional replication is working by inserting a record into the testdb database.
Log into MySQL Workbench and execute the following commands
Perform insert test on MYSQLSERVER01, verify that the record has replicated to MYSQLSERVER02.
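A minimal sketch of such a test, using a hypothetical table repl_test (adjust the names to your environment). Run the first two statements on MYSQLSERVER01 and the SELECT on MYSQLSERVER02, then repeat in the opposite direction to confirm both replication channels:

```sql
-- on MYSQLSERVER01
CREATE TABLE IF NOT EXISTS testdb.repl_test (id INT PRIMARY KEY, note VARCHAR(64));
INSERT INTO testdb.repl_test VALUES (1, 'written on MYSQLSERVER01');

-- on MYSQLSERVER02, after replication has caught up
SELECT * FROM testdb.repl_test;
```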

Configuring tsung load testing software

This document was written with the assumption that CentOS 6.5 will be used.

Host systems involved in this test.
tsung client system: client01.lab.net
web server being load tested: web01.lab.net

STEP 1. Add the following repositories to client01.lab.net; these are the Extra Packages for Enterprise Linux 6 (EPEL) repositories.

vim /etc/yum.repos.d/epel.repo
[epel]
name=Extra Packages for Enterprise Linux 6 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 6 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/6/$basearch/debug
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-debug-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 6 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/6/SRPMS
mirrorlist=http://mirrors.fedoraproject.org/metalink?repo=epel-source-6&arch=$basearch
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
gpgcheck=1

STEP 2. Install tsung load testing software

yum install tsung

STEP 3. Create a .tsung directory under the home folder of the user you will launch tsung from, and create the tsung.xml file inside it.

su testuser
mkdir -p /home/testuser/.tsung
vim /home/testuser/.tsung/tsung.xml

STEP 4. Create the following XML file by entering the XML below into the tsung.xml file.
Update the system names in the server and client tags, and update the URL in the request tag to match your environment. Using this configuration I was able to fully utilize four IIS web servers in a load test.

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">

<clients>
        <client host="client01.lab.net" weight="1" cpu="10" maxusers="40000">
                <ip value="172.16.1.20"/>
        </client>
</clients>

<servers>
      <server host="web01.lab.net" port="443" type="ssl"/>
</servers>

  <load>
     <arrivalphase phase="1" duration="10" unit="minute">
        <users maxnumber="15000" arrivalrate="8" unit="second"/>
          </arrivalphase>

   <arrivalphase phase="2" duration="10" unit="minute">
      <users maxnumber="15000" arrivalrate="8" unit="second"/>
        </arrivalphase>

   <arrivalphase phase="3" duration="30" unit="minute">
      <users maxnumber="20000" arrivalrate="3" unit="second"/>
        </arrivalphase>

 </load>

 <sessions>
    <session probability="100" name="ab" type="ts_http">
           <for from="1" to="10000000" var="i">
                       <request> <http url="/webservice/test.svc?wsdl" method="GET" version="1.1"/> 
                       </request>
           </for>
     </session>
   </sessions>
</tsung>

STEP 5. On the client system, start tsung as the user for whom you configured the tsung.xml file.

tsung start

Logging syslog messages to MySQL using syslog-ng

This guide was built using CentOS 6.5, MySQL 5.6 Community Edition and syslog-ng, and assumes you already have MySQL up and running.

STEP 1. First download and install the Extra Packages for Enterprise Linux (EPEL) repository for CentOS 6.

wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo rpm -Uvh epel-release-6*.rpm

STEP 2. Install the following packages required for syslog-ng and the MySQL log destination.

yum install syslog-ng
yum install syslog-ng-libdbi
yum install libdbi-drivers
yum install libdbi-devel
yum install libdbi-dbd-mysql

STEP 3. Create the following table in the syslog database in MySQL.

CREATE TABLE `logs` (
  `host` varchar(32) DEFAULT NULL,
  `facility` varchar(10) DEFAULT NULL,
  `priority` varchar(10) DEFAULT NULL,
  `level` varchar(10) DEFAULT NULL,
  `tag` varchar(10) DEFAULT NULL,
  `datetime` datetime DEFAULT NULL,
  `program` varchar(15) DEFAULT NULL,
  `msg` text,
  `seq` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`seq`),
  KEY `date_host` (`datetime`,`host`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

STEP 4. Create a MySQL user named "syslog" and grant it DELETE, INSERT, SELECT and UPDATE privileges on the syslog database/schema so it can write to the database.
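The user-creation and grant steps were shown as screenshots; below is a minimal SQL sketch. The username, password and host are assumptions and must match the username(), password() and host() values configured in the sql() destination in syslog-ng.conf:

```sql
-- placeholder credentials: keep in sync with the syslog-ng sql() destination
CREATE USER 'syslog'@'172.16.1.20' IDENTIFIED BY 'Pass123!';
GRANT DELETE, INSERT, SELECT, UPDATE ON syslog.* TO 'syslog'@'172.16.1.20';
FLUSH PRIVILEGES;
```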

STEP 5. Begin editing the syslog-ng.conf file

vim /etc/syslog-ng/syslog-ng.conf

Below is a sample syslog-ng configuration that also saves logs to a MySQL destination.

options {
flush_lines (0);
time_reopen (10);
log_fifo_size (1000);
long_hostnames (off);
use_dns (yes);
use_fqdn (yes);
create_dirs (yes);
keep_hostname (yes);
};

# syslog-ng log source
source s_net { udp(ip("172.16.1.20") port(514)); };


source s_sys {
file ("/proc/kmsg" program_override("kernel: "));
unix-stream ("/dev/log");
internal();
};

destination d_file { file("/var/log/messagestest" perm(0644) dir_perm(0700) create_dirs(yes)); };
destination d_cons { file("/dev/console"); };
destination d_mesg { file("/var/log/messages"); };
destination d_auth { file("/var/log/secure"); };
destination d_mail { file("/var/log/maillog" flush_lines(10)); };
destination d_spol { file("/var/log/spooler"); };
destination d_boot { file("/var/log/boot.log"); };
destination d_cron { file("/var/log/cron"); };
destination d_kern { file("/var/log/kern"); };
destination d_mlal { usertty("*"); };

# MySQL define destination
destination d_mysql {
sql(
type(mysql)
username("syslog")
password("Pass123!")
database("syslog")
host("172.16.1.20")
table("logs")
columns("host", "facility", "priority", "level", "tag", "datetime", "program", "msg")
values("$HOST", "$FACILITY", "$PRIORITY", "$LEVEL", "$TAG","$YEAR-$MONTH-$DAY $HOUR:$MIN:$SEC","$PROGRAM", "$MSG")
indexes("datetime", "host")
);
};


filter f_kernel { facility(kern); };
filter f_default { level(info..emerg) and
not (facility(mail)
or facility(authpriv)
or facility(cron)); };
filter f_auth { facility(authpriv); };
filter f_mail { facility(mail); };
filter f_emergency { level(emerg); };
filter f_news { facility(uucp) or
(facility(news)
and level(crit..emerg)); };
filter f_boot { facility(local7); };
filter f_cron { facility(cron); };

# MySQL log to destination
log {source(s_net); destination(d_file); destination(d_mysql);};

# map source to destination fields.
log { source(s_sys); filter(f_kernel); destination(d_kern); };
log { source(s_sys); filter(f_default); destination(d_mesg); };
log { source(s_sys); filter(f_auth); destination(d_auth); };
log { source(s_sys); filter(f_mail); destination(d_mail); };
log { source(s_sys); filter(f_emergency); destination(d_mlal); };
log { source(s_sys); filter(f_news); destination(d_spol); };
log { source(s_sys); filter(f_boot); destination(d_boot); };
log { source(s_sys); filter(f_cron); destination(d_cron); };

STEP 6. Restart syslog-ng

service syslog-ng restart

STEP 7. Log into MySQL and verify that logs are being saved to MySQL.

SELECT * 
FROM syslog.logs
order by datetime desc;

After completing these steps you should see the logs being saved to MySQL.

How to set up Active Directory authentication in MySQL running on Linux

This document was written using Microsoft Windows Active Directory 2012, MySQL 5.6 and CentOS 6.5, with MySQL running on CentOS 6.5. It describes how to map an Active Directory group to a MySQL user and authenticate against Active Directory. Completing these steps allows Active Directory groups to be utilized within MySQL.

In this demonstration the following groups will be referenced.

mySQL_AD – This is the MySQL user which will be mapped/associated to the Active Directory Group.

AD_mysql_users – This is the Active Directory group which will be mapped to mySQL_AD.

STEP 1. Prior to getting started you need the following PAM plugin installed.

/usr/lib64/mysql/plugin/authentication_pam.so

STEP 2. After installing the plugin, update the my.cnf file; my.cnf is normally located at /etc/my.cnf.

[mysqld]
plugin-load=authentication_pam.so

STEP 3. Restart MySQL and verify that the PAM plugin is installed.

service mysql restart
mysql -u root
select * from information_schema.PLUGINS\G

The PLUGINS query should return a row like the following:

*************************** 47. row ***************************
           PLUGIN_NAME: authentication_pam
        PLUGIN_VERSION: 1.0
         PLUGIN_STATUS: ACTIVE
           PLUGIN_TYPE: AUTHENTICATION
   PLUGIN_TYPE_VERSION: 1.0
        PLUGIN_LIBRARY: authentication_pam.so
PLUGIN_LIBRARY_VERSION: 1.4
         PLUGIN_AUTHOR: Georgi Kodinov
    PLUGIN_DESCRIPTION: PAM authentication plugin
        PLUGIN_LICENSE: PROPRIETARY
           LOAD_OPTION: ON
47 rows in set (0.00 sec)

STEP 4.
Create the file /etc/pam.d/mysql with the content below. This assumes the MySQL Linux system is already joined to the Microsoft Active Directory domain. pam_winbind.so is a PAM module that authenticates users against the domain by talking to the winbind daemon.

auth   required    pam_winbind.so
account required    pam_winbind.so

STEP 5. Create a MySQL user with a password. The password is not important because it will never be used.
mySQL_AD is the MySQL user which will be mapped to the Active Directory group.

CREATE USER 'mySQL_AD'@'%' IDENTIFIED BY 'SomePassw0rd';

STEP 6. Run the CREATE USER statement below to add the MySQL-user-to-Active-Directory mapping. In this example the Active Directory group AD_mysql_users is mapped to the MySQL user mySQL_AD.
The "mysql" at the start of the AS clause ('mysql, AD_mysql_users=mySQL_AD') must match the name of the pam.d file (e.g. /etc/pam.d/mysql).

CREATE USER ''@''
IDENTIFIED WITH authentication_pam
AS 'mysql, AD_mysql_users=mySQL_AD';

STEP 7. Grant mySQL_AD PROXY access. This gives users in Active Directory the ability to impersonate the user mySQL_AD. To run this command you must be logged in as the MySQL root user (e.g. mysql -u root -p); I tried logging into MySQL with a user that had privileges equivalent to MySQL root and the GRANT failed. Only when I logged in as MySQL root did the GRANT succeed.

mysql -u root
GRANT PROXY ON 'mySQL_AD'@'%' TO ''@'';

STEP 8.
Do a SELECT on the mysql.user table to see which MySQL users are mapped to Active Directory groups.

SELECT * FROM mysql.user;

Fig 1. (screenshot: the mysql.user table in MySQL Workbench showing the proxy mapping)

STEP 9.
Set the appropriate access for the MySQL user mySQL_AD on the MySQL server. For example, grant the user SELECT on a MySQL database. Remember that mySQL_AD is the MySQL user we created earlier, mapped to the Active Directory group AD_mysql_users.
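As a hypothetical example, to let Active Directory members of AD_mysql_users (who connect as the proxied user mySQL_AD) read a reporting schema:

```sql
-- "reporting" is a placeholder schema name
GRANT SELECT ON reporting.* TO 'mySQL_AD'@'%';
```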

STEP 10. Flush privileges; this causes the user settings to take effect.

flush privileges;

You can then perform a test connection. You may receive the error below:

[root@MYSQL01 log]# mysql -u testuser -p
Enter password:
ERROR 2059 (HY000): Authentication plugin 'mysql_clear_password' cannot be loaded: plugin not enabled

Set the flag to allow cleartext.

[root@MYSQL01 log]# export LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN=1
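
The environment variable only lasts for the current shell session. Two alternatives, sketched below: make it persistent for one user, or pass the client option --enable-cleartext-plugin (available in 5.6-era mysql clients) per invocation:

```shell
# Enable the cleartext plugin for this shell session.
export LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN=1

# To make it persistent for one user, append to their ~/.bash_profile:
#   echo 'export LIBMYSQL_ENABLE_CLEARTEXT_PLUGIN=1' >> ~/.bash_profile

# Or skip the variable entirely and pass the client option each time:
#   mysql --enable-cleartext-plugin -u testuser -p
```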

If you are connecting from MySQL Workbench, edit your connection, select “Advanced”, then check “Enable Cleartext Authentication Plugin”. Log into MySQL using your Active Directory username and password. You can verify which AD user and MySQL user you are logged in as: in Fig 2 I am authenticated as the AD user testuser, mapped to the MySQL user mySQL_AD. Note that if you enable the cleartext authentication plugin, it is recommended that you connect to MySQL over SSL from Workbench.

SELECT USER(),CURRENT_USER();

Fig 2. [Screenshot: MySQL Workbench showing the USER() and CURRENT_USER() output]
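
As a sanity check on that query: when proxying is in effect, USER() reports the external (Active Directory) account while CURRENT_USER() reports the proxied MySQL account. A small sketch using sample values of the form those functions return (the host name is hypothetical):

```shell
# Sample values of the form USER() and CURRENT_USER() return when the
# proxy mapping is in effect (hypothetical host name).
USER_FUNC='testuser@mysql01'
CURRENT_USER_FUNC='mySQL_AD@%'
# Strip the @host part to compare just the account names.
echo "authenticated as ${USER_FUNC%@*}, privileges of ${CURRENT_USER_FUNC%@*}"
```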

The configurations below were used in support of the setup above.

Additional configuration information in /etc/samba/smb.conf

[global]

        workgroup = LAB
        server string = Samba Server Version %v
        realm   =       LAB.NET
        netbios name = MYSQL01
        encrypt passwords = yes
        password server = dc01.lab.net
        #idmap config * : backend = rid
        #idmap config * : range = 10000-20000
        idmap config * : range = 16777216-33554431
        winbind use default domain = Yes
        winbind enum users = Yes
        winbind enum groups = Yes
        winbind nested groups = Yes
        winbind separator = +
        winbind refresh tickets = yes
        winbind offline logon = true
        template shell = /bin/bash
        template homedir = /home/%D/%U
        preferred master = no
        dns proxy = no
        wins server = dc01.lab.net
        wins proxy = no
        inherit acls = Yes
        map acl inherit = Yes
        acl group control = yes
        security = ads
        passdb backend = tdbsam

Additional configuration information in /etc/krb5.conf

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = LAB.NET
 dns_lookup_realm = true
 dns_lookup_kdc = true
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 proxiable = true

[realms]
 LAB.NET = {
  default_domain = LAB.NET
  kdc = DC01.LAB.NET
  admin_server = DC01.LAB.NET
 }

[domain_realm]
 .lab.net = LAB.NET
 lab.net = LAB.NET
 lab = LAB.NET

[appdefaults]
        pam = {
        ticket_lifetime         = 1d
        renew_lifetime          = 1d
        forwardable             = true
        proxiable               = true
        retain_after_close      = false
        minimum_uid             = 0
        debug                   = true
        }

Additional configuration information in /etc/nsswitch.conf

passwd:     files sss ldap winbind
shadow:     files sss ldap winbind
group:      files sss ldap winbind
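
With winbind listed in nsswitch.conf, domain accounts resolve through getent passwd on the joined host. The sketch below parses a sample getent line; the home directory and shell follow the template homedir and template shell settings from smb.conf above, while the UID and GID are hypothetical values from the idmap range:

```shell
# Sample output line from: getent passwd testuser
# (format is name:passwd:uid:gid:gecos:home:shell)
SAMPLE='testuser:*:16777220:16777218::/home/LAB/testuser:/bin/bash'
HOME_DIR=$(echo "$SAMPLE" | cut -d: -f6)
SHELL_F=$(echo "$SAMPLE" | cut -d: -f7)
echo "home=${HOME_DIR} shell=${SHELL_F}"
```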

Additional configuration information in /etc/pam.d/system-auth

auth        required      pam_env.so
auth        sufficient    pam_fprintd.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        sufficient    pam_krb5.so use_first_pass
auth        sufficient    pam_winbind.so cached_login use_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
account     [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3 type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient    pam_krb5.so use_authtok
password    sufficient    pam_winbind.so cached_login use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_mkhomedir.so skel=/etc/skel umask=0027
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_krb5.so

Reference : http://dev.mysql.com/doc/refman/5.6/en/proxy-users.html

Installing Ubuntu 14.04 on an Intel NUC

These are the instructions I used to install Ubuntu 14.04 on an Intel NUC BOXD54250WYKH1.
I was having difficulty getting the Ubuntu installer to recognize my 2.5″ SATA Crucial_CT240M500SSD1 SSD: every time the installer reached the disk-partitioning screen it would crash. It turned out my partition table was MSDOS (MBR) and needed to be GPT. Some of the screenshots were generated on an Ubuntu VMware virtual machine, but the same steps apply to a physical machine. In Fig 1 you can see that the partition table type is msdos.

STEP 1. Boot to the Ubuntu Live CD

STEP 2. Verify that your partition table is not GPT by running GParted. Select the disk you will be installing Ubuntu on, then select View and Device Information.

sudo gparted

Fig 1. [Screenshot: GParted device information showing partition table type msdos]

STEP 3. Run gdisk utility

sudo gdisk /dev/sda

The gdisk utility states that it found an invalid GPT, and will convert the MBR to a GPT.
Fig 2. [Screenshot: gdisk reporting an invalid GPT and converting the MBR]

Below I create a new partition 2 that is 550MB in size and flagged as an EFI system partition.

Command (? for help): n
Partition number (2-128, default 2):
First sector (34-33554398, default = 1077248) or {+-}size{KMGTP}:
Last sector (1077248-33554398, default = 33554398) or {+-}size{KMGTP}: +550M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): ef00
Changed type of partition to 'EFI System'
Command (? for help): w

Fig 3. [Screenshot: gdisk session creating the EFI system partition]
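
As a cross-check on the gdisk session above, the sector arithmetic for a +550M partition works out as follows (512-byte sectors assumed):

```shell
# Sanity-check the +550M partition size from the gdisk session.
FIRST=1077248                                  # first sector gdisk chose
SIZE_MIB=550                                   # requested size in MiB
SECTORS=$(( SIZE_MIB * 1024 * 1024 / 512 ))    # size in 512-byte sectors
LAST=$(( FIRST + SECTORS - 1 ))                # inclusive last sector
echo "partition spans sectors ${FIRST}-${LAST} (${SECTORS} sectors)"
```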

STEP 4. Format the EFI system partition as FAT32

sudo mkfs.msdos -F 32 /dev/sda1

Fig 4. [Screenshot: the device after conversion to GPT]

After completing these steps I was able to restart the Ubuntu installer and install Ubuntu 14.04 on my NUC and Crucial SSD.

Testing network latency using TCP instead of ICMP

This configuration was built using CentOS 6.5. Sometimes I am unable to test network latency because ICMP is blocked. Under those circumstances hping2 is useful because it uses TCP to measure network latency. However, one capability that hping lacks is a timestamp on each measurement; with a little BASH scripting we can add one.

STEP 1. Install hping2

yum install hping2

STEP 2. Create a script which adds a timestamp to each measurement

while true; do \
    hping2 www.google.com -S -p 443 -i 1 -c 1 | grep "len=" \
    | sed -e "s/^/`date` /g"; \
    sleep 1; \
done

Script output

--- www.google.com hping statistic ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 15.9/15.9/15.9 ms
Mon Mar  9 07:08:15 CDT 2015 len=46 ip=216.58.216.228 ttl=50 id=50666 sport=443 flags=SA seq=0 win=42900 rtt=15.9 ms

--- www.google.com hping statistic ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 18.5/18.5/18.5 ms
Mon Mar  9 07:08:16 CDT 2015 len=46 ip=216.58.216.228 ttl=50 id=1590 sport=443 flags=SA seq=0 win=42900 rtt=18.5 ms
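
Once the timestamped lines are being logged, the round-trip times can be summarized with awk. A minimal sketch, run here against the two sample lines from the output above:

```shell
# Extract each rtt=N.N value and compute the average round-trip time.
avg=$(awk 'match($0, /rtt=[0-9.]+/) {
               v = substr($0, RSTART + 4, RLENGTH - 4)  # the number after "rtt="
               s += v; n++
           }
           END { if (n) printf "%.1f", s / n }' <<'EOF'
Mon Mar  9 07:08:15 CDT 2015 len=46 ip=216.58.216.228 ttl=50 id=50666 sport=443 flags=SA seq=0 win=42900 rtt=15.9 ms
Mon Mar  9 07:08:16 CDT 2015 len=46 ip=216.58.216.228 ttl=50 id=1590 sport=443 flags=SA seq=0 win=42900 rtt=18.5 ms
EOF
)
echo "average rtt: ${avg} ms"
```

In practice you would redirect the loop's output to a log file and feed that file to awk instead of the heredoc.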