Tuesday, November 11, 2014

Git: How to clone a repository using a token on the command line

I was creating a Vagrant development instance that I intended to share, and I wanted it to clone from our private repository. I didn't want anyone to have to log in, and I didn't want any manual steps. Since we already had a token for our group, I wanted to use that token to clone and do git pulls of all of our puppet configs on the development instance. I found a solution on the web that led to the answer.


Create your token: read up on how to create an access token on github - it's easy. I used this link: https://help.github.com/articles/creating-an-access-token-for-command-line-use/

Then you can use your token to do anything it allows. For example:

git clone --recursive https://TOKEN-GOES-HERE:x-oauth-basic@github.com/private-repo/puppet-configs.git /etc/puppet
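Since the token ends up embedded in the origin URL, later pulls on the instance don't need any credentials. The path here just matches the clone target above:

cd /etc/puppet && git pull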

The site that helped me was this one: 

http://rzrsharp.net/2013/07/02/private-github-repos-with-npm-and-heroku.html

Thursday, November 6, 2014

Git: How to delete a tag off the remote repo


If you have a tag named '12345' then you would just do this:
git tag -d 12345
git push origin :refs/tags/12345
That will remove '12345' from the remote repository.
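On newer versions of git (1.8.0 or later, if memory serves), the --delete flag does the same thing in one step:

git push origin --delete 12345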
Found that wonderful piece of help here:
http://nathanhoad.net/how-to-delete-a-remote-git-tag

Wednesday, October 22, 2014

Calling a jenkins job with a bash script

I don't care to use a mouse when I'm working on a project unless I have to. As nice as Jenkins is, if you have to re-run a job often, clicking through the UI can get old fast. I had trouble getting the API access working with a script, so I'd better write down what I did.

First thing: you need to enable remote triggers at the job level. Configure the job and, under Build Triggers, select "Trigger builds remotely" and add a job token. See my earlier post for one or two options to generate a job token.

The next thing you need to do is get your personal API token from Jenkins. You could create a separate account for this if you choose, but it needs to be an account with permission to do what you want, e.g. run the job from a script. Click on your login name, then Configure > API Token > Show API Token.

I took this from one of the Jenkins posts out there, but I don't remember which one. I use curl instead of wget and handle both parameterized and parameterless jobs. There's more you could do to clean this up, but it works.



#!/bin/bash
########################################################################
# This is an example of how to call a jenkins job using bash
#
# PreReqs:
# 1: Add a token to your jenkins job. Configure the job,
#    "Build Triggers - Trigger builds remotely (e.g. from scripts)"
#    One way to generate a job token:
#
#    $> r="";for e in $(seq 1 5); do r="${r}$(od -vAn -N4 -tu4 < /dev/urandom|sed 's| ||g')"; done;echo $r|openssl sha1 -sha256
#
# 2: Go get your own personal token.
#    Click on your name, then Configure > API Token > Show API Token
#
# 3: Run your job, e.g. one that has parameters:
#    ./run-jenkins-job.sh <login> <personal-token> <jenkins-job-name> <job-token> 'PARAMA=value&PARAMB=value'
#
#    or one without params:
#    ./run-jenkins-job.sh <login> <personal-token> <jenkins-job-name> <job-token>
#
#    Any job parameters go in that last quoted argument.
########################################################################
USER="$1"
TOKEN="$2"
JOB="$3"
JTOKEN="$4"
PARAMS="$5"
function usage {
echo "$0 <jenkins-login> <jenkins-login-token> <jenkins-job> <job-token> 'job_param_a=value&job_param_b=value'"
}
if [ "" == "${USER}" ]; then
usage
exit 1
fi
if [ "" == "${TOKEN}" ]; then
usage
exit 1
fi
if [ "" == "${JOB}" ]; then
usage
exit 1
fi
if [ "" == "${JTOKEN}" ]; then
usage
exit 1
fi
# jobs that have params need a different url and the params need to be on the query string
if [ "" == "${PARAMS}" ]; then
NOTIFY_URL="job/${JOB}/build"
else
NOTIFY_URL="job/${JOB}/buildWithParameters?${PARAMS}"
fi
CRUMB_ISSUER_URL='crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)'
function notifyCI {
CISERVER=$1
# Check if "[X] Prevent Cross Site Request Forgery exploits" is activated
# so we can present a valid crumb or a proper header
HEADER="Content-Type:text/plain;charset=UTF-8"
CRUMB=$(curl --user ${USER}:${TOKEN} ${CISERVER}/${CRUMB_ISSUER_URL} 2>/dev/null)
if [ "$CRUMB" != "" ]; then
HEADER=$CRUMB
fi
curl -X POST ${CISERVER}/${NOTIFY_URL} --header "${HEADER}" --data-urlencode "token=${JTOKEN}" --user "${USER}:${TOKEN}"
echo "Done"
}
# The code above was placed in a function so you can easily notify multiple Jenkins/Hudson servers:
notifyCI "http://jenkins-server"


Sunday, October 19, 2014

Generating a token

I found a couple of ways to quickly generate a token for use with APIs. I needed a unique token for my Jenkins jobs and found these two useful.

r="";for e in $(seq 1 5); do r="${r}$(od -vAn -N4 -tu4 < /dev/urandom|sed 's| ||g')"; done;echo $r|openssl sha1 -sha256

d81697b6322a81a7fb19e0ef1141f534da0634244e76b5590332a1a186c7c4a9

r="";for e in $(seq 1 10); do r="$r${RANDOM}"; done; echo $r|openssl sha1 -sha256

Wednesday, July 23, 2014

creating rpms with a specfile from a repository

I was working with my varnish vmod today, attempting to package it into rpm. I found a useful tutorial here: https://fedoraproject.org/wiki/How_to_create_an_RPM_package

I tend to be a bit impatient, though, and I didn't read everything. I wanted my rpmbuild process to check out the code from a repo and then run ./configure && make, but I ended up getting this odd error:

error: No "Source:" tag in the spec file

It took a little while to get to the bottom of things, but I eventually discovered that having the following macro in my .spec file

%setup -q

caused rpmbuild to need a Source: line. Once I took that line out, all was good again. The spec files I built are for libmaxmindb, varnish cache and a varnish module. The code is here:

https://github.com/russellsimpkins/varnish-mmdb-vmod


Friday, July 18, 2014

Working with Grids

While working on an algorithms and data structures course I needed to represent an x/y grid in an array. After a bit of thought I realized that for any grid of size N, you can represent the grid as an array of length N*N. To convert an array index p into x/y coordinates (using integer division):

x = p/N
y = p%N

To get the array position p from an x/y coordinate:

p = N*x+y
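Here's a quick sanity check of that mapping in bash (just a sketch I put together; N=4 is arbitrary):

#!/bin/bash
# walk every array index p for a 4x4 grid, convert to x/y and back again
N=4
for p in $(seq 0 $((N*N-1))); do
  x=$((p / N))
  y=$((p % N))
  # convert back; q should always equal p
  q=$((N * x + y))
  printf "p=%2d -> x=%d y=%d -> p=%d\n" "$p" "$x" "$y" "$q"
done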


Wednesday, June 25, 2014

How to determine which yum package to install

If you have a library that's missing and can't quite figure out what provides the file, you can ask yum:

yum whatprovides '*/libz.so.1'

32 bit compatibility

If you don't install rpms a lot, you get a little rusty. If you have a 32 bit program on your 64 bit server (or vice versa) you will know it when you get an error like this:

/lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

You can also link to the wrong version. Say your file was linked against a 64 bit library at compile time and you alter LD_LIBRARY_PATH (thinking you're going to fix it); the link will resolve, but since it's not the right class you will get:

wrong ELF class:

To install the 32 bit compatibility, you should install the .i686 version e.g.

yum install -y libz.i686

You can also install the whole compatibility group while you're at it:

 yum groupinstall "Compatibility libraries"
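If you're not sure which ELF class a binary or library was built for, file will tell you. The path and output here are just illustrative:

$> file /usr/local/bin/someprog
/usr/local/bin/someprog: ELF 32-bit LSB executable, Intel 80386, dynamically linked, ...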

Wednesday, April 30, 2014

How to get a core dump


If your process is segfaulting and you aren't getting a core dump, you need to make sure the kernel will allow them. First, make sure that ulimit allows core files.


$> ulimit -a 
core file size          (blocks, -c) 0

That says that core files are not allowed. You can set that for the current session by running:

$> ulimit -c unlimited 

Though you may be better off editing /etc/security/limits.conf so the change sticks (there's a sketch of that a little further down). The next things to set are fs.suid_dumpable and kernel.core_pattern. See: http://man7.org/linux/man-pages/man5/core.5.html


$> sysctl -w fs.suid_dumpable=2
$> sysctl -w kernel.core_pattern=/tmp/core

When setting the core_pattern, make sure that directory, in this case /tmp, is writable by the user the process runs as. Be careful that your start script is not overriding anything. I found that the daemon function in /etc/init.d/functions sets the -c ulimit, so you need to export DAEMON_COREFILE_LIMIT=unlimited in your start script.
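As for the limits.conf route mentioned above, here is a sketch of the lines to add (run as root; tighten '*' to a specific user if you prefer):

cat >> /etc/security/limits.conf <<'EOF'
*    soft    core    unlimited
*    hard    core    unlimited
EOF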

Once you have a core file, you can analyze it with gdb by giving it both the binary and the core, e.g.

$> gdb /usr/sbin/httpd /tmp/core.0231
(gdb) bt


The backtrace may require you to install debug symbols with debuginfo-install. To get that tool I had to run:

$> yum install yum-utils

Then I could run debuginfo-install e.g. 

$> debuginfo-install httpd-2.2.15-30.el6.centos.x86_64


Saturday, March 1, 2014

How to generate analyze statements for Oracle

Sometimes you get stuck with DBAs who can't be bothered to analyze your tables or indexes. There is an easy way to generate your own analyze statements. This example assumes you're using sqlplus. Replace <TABLE_OWNER> with your own table owner.

set pagesize 0
set linesize 255
spool /tmp/analyze-table.sql

select 'analyze table <TABLE_OWNER>.' || object_name || ' compute statistics;' from all_objects where owner like '<TABLE_OWNER>%' and object_type = 'TABLE';
exit

You will now have all the statements you need in /tmp/analyze-table.sql
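If you like, you can feed the spool file straight back into sqlplus (a sketch; the credentials and connect string are placeholders):

sqlplus -s scott/tiger@ORCL @/tmp/analyze-table.sql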

Thursday, January 23, 2014