Concrete5 CMS. Web design using a CMS. Notes to remind me.

These notes are more for my own reference than anything else. However, someone might find them useful.

Welsh Flag Internationalization package

I found that when creating a bilingual site, British English and Welsh, the Welsh flag icon shown was actually the UK Union Flag, even though the correct Welsh flag icon was present in ./images/flags/wales.png. So, taking a look at the MySQL table in the website database:

mysql> select * from MultilingualSections;
+-----+------------+--------+----------+
| cID | msLanguage | msIcon | msLocale |
+-----+------------+--------+----------+
| 145 | en_GB      | GB     | en_GB_GB |
| 146 | cy         | GB     | cy_GB    |
+-----+------------+--------+----------+
2 rows in set (0.00 sec)

So, in order to resolve this, I simply changed the msIcon value for cy:

mysql> update MultilingualSections SET msIcon = 'wales' where msLanguage = 'cy'; 
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select * from MultilingualSections;
+-----+------------+--------+----------+
| cID | msLanguage | msIcon | msLocale |
+-----+------------+--------+----------+
| 145 | en_GB      | GB     | en_GB_GB |
| 146 | cy         | wales  | cy_GB    |
+-----+------------+--------+----------+
2 rows in set (0.00 sec)

Pretty URLs

In order to get rid of the index.php in your URLs, you need to turn on 'Pretty URLs' under System & Settings > SEO & Statistics. This is especially important when wanting to use the internationalization package. Once you do this, you will probably find that your site doesn't work any more. To fix this, you need to edit:

#sudo vi /etc/httpd/conf/httpd.conf

and add the following:

<Directory "/var/www/html">'
Options +FollowSymLinks
AllowOverride all
Order deny,allow
Allow from all
RewriteEngine On
</Directory>

assuming that /var/www/html is the root directory of your concrete5 installation.
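
For reference, when you turn on Pretty URLs, concrete5 shows you a set of rewrite rules to place in the .htaccess in your site root. They look something like this (a sketch; the exact rules vary by version, so use what concrete5 gives you):

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php/$1 [L]
</IfModule>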

301 redirect

I know, this isn't anything to do with concrete5, but it had a weird consequence when it was not set. I had a vhost set up in httpd.conf so that:

ServerName example.co.uk
ServerAlias www.example.co.uk

This has always worked fine, and avoids that embarrassment when one of your clients' websites shows in preference to another. However, what was happening on one of my concrete5 sites was that a custom font was not being found when accessing example.co.uk. To resolve this it is best to use a 301 redirect, which is recommended practice anyway. To do this, add the following to the .htaccess file in the root directory of your website(s):

RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.co\.uk$
RewriteRule (.*) http://www.example.co.uk/$1 [R=301,L]

I also added the following to httpd.conf:

<Directory "/services/httpd/www.example.com/html">
Options +FollowSymLinks
AllowOverride all
Order deny,allow
Allow from all
RewriteEngine On
</Directory>
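
To check that the redirect is behaving, curl makes this easy; you should see a 301 status and a Location: header pointing at the www. address:

#ask for just the response headers
curl -sI http://example.co.uk/ | head -n 5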

Installing on Solaris 11

What a pain this can be!! I assume that you have CSW, Apache and PHP5 already installed.

#unzip it wherever your apache server is pointing
unzip concrete5.zip

#sort permissions out;
chown -R nobody files packages config

#install x11
pkg install pkg:/x11/library/[email protected]

#you need to install the CSWgd library
/opt/csw/bin/pkgutil -i CSWgd

#install mysql and php interface for it
pkg install mysql-51
/opt/csw/bin/pkgutil -i CSWphp5-mysql

#add the following extensions to your php.ini file;
vi /etc/opt/csw/php5/php.ini
extension=gd.so
extension=mysql.so

#restart apache
/opt/csw/apache2/sbin/apachectl restart
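
To confirm that the new extensions actually loaded, you can list PHP's modules; a quick check (the CSW php binary path here is an assumption, adjust for your install):

/opt/csw/php5/bin/php -m | egrep 'gd|mysql'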

With any luck, when you go to the site in your web browser, it will pass the tests and you should be okay to proceed with the installation. So let's make the database:

#if you haven't yet, set a root password
/usr/mysql/5.1/bin/mysqladmin -u root password NEWPASSWORD

#within mysql, set up the database
mysql -u root -p
mysql> create database c5db;

#add user...clearly change the password to something sensible
mysql> grant usage on *.* to 'c5user'@'localhost' identified by 'c5password';

#give user permissions
mysql> grant all privileges on c5db.* to 'c5user'@'localhost';
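
Assuming the names above, you can check the new account works before pointing the installer at it:

#should drop you into the c5db database as the new user
mysql -u c5user -p c5db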

There we are; you can enter this information into your installation and c5 will do the rest...hopefully.

My Sun Grid Engine (SGE) Setup and Admin Notes

Queue management software for HPC

These are more for myself than anything else, but someone might find them useful, as it's difficult to find SGE documentation amid all the LSF stuff. e.g. both have a qsub command, but with different parameters.

Initial Configuration
With a default installation of SGE, here are the changes made for the cluster in IMAPS:

Fair-share functional policy
This may not be the best policy; however, it is better than FIFO.

Activate the functional share policy by specifying the number of functional share tickets. The value is arbitrary in that any value greater than zero will trigger it, but it needs to be suitably large so that the tickets can be shared out to users evenly. e.g. if you have 10 users and each has 100 tickets, then it needs to be at least 1000. So, as root (or an SGE admin), edit the scheduler config:

[root@master ~]# qconf -msconf

and set the following:

weight_tickets_functional 1000000

Now we need to assign users some tickets. To do this, edit a different config:

[root@master ~]# qconf -mconf

and set the following:

enforce_user auto
auto_user_fshare 100

This gives each user 100 tickets.
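
To confirm a user object was auto-created with its tickets, you can display it (the username here is hypothetical):

[root@master ~]# qconf -suser someuser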

NOTE: When I did this I played around a lot to see its behaviour. I found that once a user has submitted a job through SGE without the assignment of 100 tickets (e.g. if you remove that setting to see what happens), they will never get tickets assigned after you turn it back on. So you need to add the shares for that user manually (there may be a better way):

[root@master ~]# qconf -muser username

and that will give you something like:

name            username
oticket         0
fshare          100
delete_time     1358000970
default_project NONE

Setting up memory allocation

This is simple in practice, but it has a couple of issues to be aware of. I did this using the popular h_vmem method, though I may change this at some point. The reason is that it assumes h_vmem is both what the job will actually use and what the hard limit is, which may not be the case. e.g. if you have a job that initialises for a few minutes, peaks at 4GB, but then only uses 2GB for the next 2 weeks, reserving 4GB for the duration is a waste of resources on an eight-core machine with 16GB of RAM (e.g. the nodes I have on the IMAPS machine). For now this will be the case until I can give it some serious thought.

First you need to make sure that h_vmem is a consumable resource:

[root@master ~]# qconf -mc

#name    shortcut  type    relop  requestable  consumable  default  urgency
#---------------------------------------------------------------------------
..
h_vmem   h_vmem    MEMORY  <=     YES          YES         0        0

Now that you've done that, you need to add this resource as a complex value on each node, like so (you can of course script this for every node; see the sketch below):

[root@master ~]# qconf -me node001

hostname node001
load_scaling NONE
complex_values h_vmem=16G
user_lists NONE
xuser_lists NONE
projects NONE
xprojects NONE
usage_scaling NONE
report_variables NONE

As you can see, I've added h_vmem=16G to node001. This is the amount of consumable memory that can be allocated.
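
As noted above, you can script this for every node rather than editing each one by hand. A minimal sketch using qconf -mattr, assuming nodes named node001 to node008:

#!/usr/bin/bash
#set the consumable h_vmem complex on every node non-interactively
for n in node00{1..8}; do
qconf -mattr exechost complex_values h_vmem=16G $n
done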

WARNING: Once you do this, h_vmem HAS to be set on all jobs, otherwise they will fail. To combat this for the forgetful, let's add a default value by editing the sge_request file. So locate and open the file in your editor:

[root@master ~]# which qsub
/cvos/shared/apps/sge/6.1/bin/lx26-amd64/qsub

[root@master ~]# vi /cvos/shared/apps/sge/6.1/default/common/sge_request

Add the following to the bottom:

# default memory limit 
-l h_vmem=2G

And this will now give a 2G limit to every job unless otherwise stated.
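
A job that needs more than the default then just asks for it at submission time, e.g. (the script name is hypothetical):

qsub -l h_vmem=8G myjob.sh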

Issue with IDL and h_vmem

So there is an issue with this method: for some reason IDL won't start, even when specifying a very large amount of memory. This is all down to the stack size (h_stack). To stop this being an issue, add the following line to the sge_request file:

# default stack size (otherwise IDL and Matlab fail to start)
-l h_stack=128m

Clearly if the stack size is not enough for some programs, then users can specify a larger stack using the -l h_stack flag. I did have one user running some Perl code that needed 512MB of stack space.
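
For example, that job would be submitted with something like (the script name is hypothetical):

qsub -l h_vmem=4G -l h_stack=512m run_perl.sh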

Allowing reservations

qconf -msconf

Change max_reservation from 0 to a number; in this case I've chosen 32. Also change default_duration from INFINITY to something very long but finite (with INFINITY the scheduler assumes running jobs never finish, so it can never work out when a reserved job would be able to start).
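
Jobs that should benefit from a reservation then request one at submission time with -R y, e.g. for a parallel job (the PE name here is hypothetical):

qsub -R y -pe mpi 16 bigjob.sh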

Finding that a node isn't accepting jobs

If you discover that a node isn't accepting jobs, here is what it may be:

qstat -f    # gives full status of nodes

If you see:

[root@master ~]# qstat -f
queuename              qtype resv/used/tot. load_avg arch       states
----------------------------------------------------------------------
queue.q@node01.cluster BIP   0/9/32         12.78    lx26-amd64
queue.q@node02.cluster BIP   0/0/32         0.00     lx26-amd64 d

Then we know that node02 is disabled (the 'd' in the states column). To bring it back on we basically re-enable the queue, which goes through and enables all nodes in that queue:

[root@master ~]# qmod -e queue.q

root - queue "queue.q@node01.cluster" is already enabled

Queue "queue.q@node02.cluster" has been enabled by root@master

You can also, of course, disable a queue:

[root@master ~]# qmod -d queue.q

NB: This needs to be run on your master node as root.
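
You can also enable or disable a single queue instance rather than the whole queue, by naming the host (hostname assumed to match the qstat -f output above):

[root@master ~]# qmod -d queue.q@node02.cluster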

Changing nodes in a queue

OK, this is simple. Just type:

qconf -mq queue.q

This will bring up a vi-like editor. Then change the second line (hostlist):

hostlist node05.cluster node06.cluster node07.cluster

...to whatever nodes you wish to have on it.
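
If you'd rather avoid the editor, qconf can change the attribute directly; a sketch adding one node to the hostlist (the node name is hypothetical):

#append a node to queue.q's hostlist without opening the editor
qconf -aattr queue hostlist node08.cluster queue.q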

Jobs pending

So you notice that there are lots of jobs pending in the queue. To check why a job isn't running, type:

qstat -j JOBNUMBER

And it will tell you why.

If you see lots of:

queue "all.q" dropped because it is temporarily not available queue "all.q" dropped
because it is temporarily not available queue "all.q" dropped because it is 
temporarily not available

Then type:

qstat -f

This will tell you the status of the nodes. If they are in E (error) status it will also tell you why; usually a job caused it to stop. e.g.

queue "queue.q" marked QERROR as result of job 123456's failure at host node001

You can move them out of error by typing:

qmod -cq all.q

Move queue

If you would like to move a job from one queue to another, you can do this with:

qalter -q all.q 173143

Where all.q is the queue you wish to move it to, and 173143 is the job-id number.

Getting some stats

Basically I wanted to measure the performance of various job submissions with different thread counts. Each run produced an output and error file of the form file_threadnumber.o12345, where 12345 is the job number and threadnumber is the number of threads the program used. This quite large one-liner does the following:

Lists the *.o* files
Gets the job number
Gets the job details and searches for vmem, start and end times
Removes the line return so that everything remains on one line
Sorts the whole lot by thread number
Splits up the times into seconds, calculates the time taken and prints everything.

ls -1 *.o* | awk '{split($1,a,"_"); split(a[3],b,"."); split(b[2],c,"o"); printf "Threads "b[1]" "; system("qacct -j "c[2]" | grep \"vmem\\|start_time\\|end_time\" | tr -d \"\\n\""); print ""}' | sort -nk 2 | awk '{split($7,a,":"); split($12,b,":"); print $1" "$2" "(b[1]*60*60+b[2]*60+b[3])-(a[1]*60*60+a[2]*60+a[3])" "$14}'

Probably a bit long-winded, but who doesn't like a good one-liner...

Further Reading
Things I have found useful:

http://ait.web.psi.ch/services/linux/hpc/merlin3/sge/admin/

A couple of things I found useful when doing some desktop support

Cab Extraction

I've occasionally had to get drivers out of very large cab files, usually Dell ones. This takes ages in Windows, so I extract them once in Linux using cabextract and then share the result via the Samba server, rather than extracting on every Windows machine. cabextract is a great piece of software: very useful and really simple to use.
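
A minimal example, with made-up file names:

#extract the cab into a directory, which can then be shared via samba
cabextract -d R12345 R12345.cab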

NFS Group permissions not working

Despite the UID permissions working correctly, and all the UIDs/GIDs matching up perfectly, an Ubuntu file store I was exporting via NFS would not honour the existing group permissions on the client. To solve this, on the NFS file store, comment out the

--manage-gids

option in

/etc/default/nfs-kernel-server
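
On Ubuntu the option is passed to rpc.mountd via that file; a sketch of the change, assuming the stock Debian/Ubuntu layout:

# /etc/default/nfs-kernel-server
# before: RPCMOUNTDOPTS="--manage-gids"
# after, with the option removed so mountd trusts the client's group list:
RPCMOUNTDOPTS=""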

Restart nfs-kernel-server and it should all work fine.

UNIX One-Liners ... and the occasional short script.

These are things that I have occasionally found useful.

Looking for last accessed files

This was part of my reporting so that I could try to get people to remove some data from their storage space on our HPC. This one-liner reports, in GB, the amount of storage taken up by files that haven't been accessed in over a year.

NOTE: the -type f matters here; without it, running ls on a directory lists the directory's contents and files end up counted twice.

find . -type f -atime +365 -exec ls -l '{}' \; | awk '{c+=$5} END {print c/1073741824 " GB"}'

To get per-user data, run this as root in the directory containing the users' home directories.

#!/usr/bin/bash
#report, per user, the space taken by files not accessed in over a year
for i in *; do
find "$i" -type f -atime +365 -exec ls -l '{}' \; | \
awk -v user="$i" '{c+=$5} END {print user, c/1073741824, "GB"}'
done

Splitting data

This assumes a stream of one-column data that you may wish to reformat as floats with 3 d.p., then takes every 266 values and makes them a row. So if there were 266*266 values in the file, you would get a 266*266 matrix.

cat file | awk '{printf "%.3f\n", $1}' | xargs -n 266

Merge two files

Assume you have two files where the rows represent the same entry, but the columns are split across the two files... so we'd like to join them side by side.

pr -m -t -s" " file1 file2

Check a bunch of servers for almost full partitions

This assumes you have set up ssh keys so you don't need to enter a password.

#!/usr/bin/bash
#warn about any partition that is THRESHOLD percent full or more
THRESHOLD=90
for i in server1.domain.net server2.domain.net server3.domain.net; do
ssh -q root@"$i" 'df -hP' | \
grep -v Filesystem | \
sed 's/%//g' | \
awk -v host="$i" -v t="$THRESHOLD" '$5+0 >= t+0 {print "Warning: Low disk on " host " " $6 " at " $5 "%"}'
done

DVD to iso, and back again

dd if=/dev/dvd of=myiso.iso

cdrecord -v -dao speed=1 dev=/dev/dvd myiso.iso