Tuesday, July 22, 2014

elasticsearch unassigned shard

I don’t claim to understand the intricacies of Elasticsearch, but here is one thing that may help anyone else having problems with unassigned shards of data. With either the “head” or “kopf” plugin installed (graphical web interfaces for administering an Elasticsearch cluster), I could see some shards showing up at the bottom of the list of nodes in a row called “unassigned”.


Quick searches didn’t yield much, but I tried “closing” and then reopening the offending index. The unassigned shards got assigned to nodes for that index, the cluster health went from “yellow” to “green”, and things seem happy again.
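
For reference, closing and reopening an index through the REST API looks like this (a minimal sketch; “my_index” is a placeholder for your actual index name):

curl -XPOST 'http://localhost:9200/my_index/_close'
curl -XPOST 'http://localhost:9200/my_index/_open'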


Here are a few search results I had reviewed (but they seemed more involved than I was hoping to get into):


http://ift.tt/1z07iV4


http://ift.tt/1fodT1c


When I clicked on the unassigned shards, this is some of what showed up in kopf and head plugins:

state: UNASSIGNED
primary: false
node: null
relocating_node: null
shard: 0






Thursday, July 17, 2014

Salt master does nothing when executing a state.highstate

In my case, it seems to have been a bug as described here: http://ift.tt/1zLJNjK


I could see errors in /var/log/salt/master that looked like this:

Received function _ext_nodes which is unavailable on the master, returning False


I did have a few minions that were on 2014.1.4 while my master was on 2014.1.5.


The fix described in the above post is effectively just to paste in a dummy function by that name:


Find the file master.py (there may be two copies, so search for the one that’s actually in use) and insert the following just above “def _master_opts”:


def _ext_nodes(self, load):
    '''
    Stub out until 2014.1.5 minion installed
    '''
    return {}
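
If you’re not sure where master.py lives, something like this will usually turn it up (the exact path varies by distro and Python version):

find /usr -path '*salt*' -name master.py

Restart the salt master afterwards (e.g. “service salt-master restart”) so the change takes effect.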





Ubuntu install for Salt-Minion 2014.1.7 still doesn’t include the config file

There’s no minion config file at /etc/salt/minion after installing via apt. I had to copy one over from a CentOS machine; it seems to work fine after that, fwiw.


I did this:

apt-get install salt-minion


When it’s done, there’s no config file. And, yeah, you’ll see lots of unhappy log lines about not being able to resolve the host “salt” (the default master name) if you need to point the minion at the master by IP instead.
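
For reference, the one setting that matters most in a minimal /etc/salt/minion (the IP here is a made-up example; use your master’s address):

# /etc/salt/minion
master: 192.168.1.10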





Run multiple commands in one salt state

SaltStack’s cmd.run is great, but what if you want to run multiple commands and you don’t want to mess with a script?


The only way I could get it to work was to have separate cmd.run blocks written in the order you want things executed. An example would be if you wanted to kill and restart a process every highstate:


Kill the process:
  cmd.run:
    - name: killall myproc.sh


Wait a few seconds:
  cmd.run:
    - name: sleep 3s


Start the process:
  cmd.run:
    - name: /var/lib/myproc/myproc.sh


You can have multiple commands run from one block, but these commands seem to get executed in random order:


Kill then wait then restart:
  cmd.run:
    - names:
      - killall myproc.sh
      - sleep 3s
      - /var/lib/myproc/myproc.sh
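
If you want the ordering to be explicit instead of relying on the file order of separate blocks, Salt’s require requisite can chain them together (a sketch reusing the same commands as above):

kill-myproc:
  cmd.run:
    - name: killall myproc.sh

wait-myproc:
  cmd.run:
    - name: sleep 3s
    - require:
      - cmd: kill-myproc

start-myproc:
  cmd.run:
    - name: /var/lib/myproc/myproc.sh
    - require:
      - cmd: wait-myproc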






Tuesday, July 15, 2014

Restoring a SQL Server DB that has multiple files

While restoring a forums DB, we hit this error:


File 'D:\MSSQL\Data\my_forum_db_file.mdf' is claimed by 'ftrow_file_new'(3) and 'my_forum_db_file'(1). The WITH MOVE clause can be used to relocate one or more files.


In this case, the problem was that, by default, the restore database wizard doesn’t change the destination file name of the full-text index data file, so it collides with the main data file. In the Options section of the restore database window, just change that destination file name to something different from the main data file of the database.


I know – this may not be too clear, but you should see what I’m talking about when you right-click the DB instance > Restore Database > point at the backup file > Options at the top left.
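
If you’d rather script the restore, the same fix looks roughly like this in T-SQL (the backup path and physical file names here are placeholders; run RESTORE FILELISTONLY first to see the real logical names inside your backup):

-- list the logical files contained in the backup
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\my_forum_db.bak';

-- restore, moving the full-text file to its own physical file
RESTORE DATABASE my_forum_db
FROM DISK = N'D:\Backups\my_forum_db.bak'
WITH MOVE 'my_forum_db_file' TO N'D:\MSSQL\Data\my_forum_db_file.mdf',
     MOVE 'ftrow_file_new' TO N'D:\MSSQL\Data\my_forum_db_ft.ndf';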





Friday, July 11, 2014

Scheduling a state.highstate on all minions with SaltStack schedulers

It was not obvious to me from the docs exactly where to put the scheduling “stuff”. I was only looking at the scheduling docs page, so I’m sure some other doc mentions this. Here’s enough to get you started in case you don’t want to read more ;)


nano /srv/pillar/top.sls


base:
  '*':
    - schedule


mkdir /srv/pillar/schedule

nano /srv/pillar/schedule/init.sls


schedule:
  highstate:
    function: state.highstate
    minutes: 60
    maxrunning: 1


All of that will result in all your minions running a highstate every 60 minutes. You can obviously filter by changing the '*' to whatever partial minion name you want, just like with salt states.


You can see these schedules in each minion’s pillar data:

salt 'lbtest*' pillar.data

For me, the schedule showed up at the bottom of the returned data.
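
If a minion doesn’t show the new schedule yet, you can force a pillar refresh rather than waiting:

salt '*' saltutil.refresh_pillar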


I figure this could or should replace cron jobs, say, for database backups, though I’m not sure if there are any *serious* dangers of the salt minion dying or otherwise failing.





Windows Firewall – which rule allows inbound ICMP “Pings”?

Wednesday, July 9, 2014

Windows Event Logs – looking for drive-by hacking attempts

Here are a few event IDs to look for:


4625 – classic failed logon attempt

5156 – the Windows Filtering Platform permitted a connection (look at your firewall to see if you’re allowing inbound connections from IP ranges you really don’t need to allow – more on that below)
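
If you’d rather pull these with PowerShell than scroll through Event Viewer, filtering by ID works (a sketch; bump MaxEvents to taste):

# recent failed logons (4625) from the Security log
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 50

# recent connections the firewall permitted (5156)
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 5156 } -MaxEvents 50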


A good way to know what your Windows machine’s firewall is allowing inbound is to do this:


Server Manager > Configuration > Windows Firewall with Advanced Security > Inbound Rules > then on the right go Filter by State > Filter by Enabled > then sort everything by the “Remote Address” column


Then, just go down the list and look for Remote Address ranges you really don’t need to allow into your machine. Generally speaking, make everything either “Local subnet” or specific IP addresses/ranges you *know* you need to allow into this machine. If you see “Any”, then the local port should be something like 80, 8080, or 443 – something you actively want every computer on the entire intarwebs to be able to access.
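
If you prefer a command-line dump of the inbound rules (you still have to eyeball the RemoteIP lines), this works too:

netsh advfirewall firewall show rule name=all dir=in | more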





Tuesday, July 8, 2014

Salt Stack for monitoring

I just have to believe Salt Stack is a great low-overhead and simple way to monitor servers.


One way you can start with this is by using the psutil interface module that comes with Salt now.


I had come across this slide deck. It seemed too involved for my liking, but something about the Salt Scheduler and its mention of commands that start with “ps.” caught my eye.


Well, Salt has a built-in module that will return various psutil outputs. You have to have psutil installed on each minion though… specifically the Python version. Here’s how I did it on CentOS 6.5:


yum install python-psutil.x86_64


You could do this concurrently on all Salt minions by typing this on the salt master:

salt '*' cmd.run 'yum install python-psutil.x86_64 -y'


Or, you could make a global Salt state that applies to all minions and looks something like this (I put it in globalpackageinstalls/init.sls):


python-psutil.x86_64:
  pkg.installed:
    - pkg_verify: True


and then in your top.sls you would put something like this:


base:
  '*':
    - globalpackageinstalls


Whatever the case, once the Python version of psutil is installed, you can run any of the ps commands against all minions at once. Making use of the “Returners” feature should allow you to get the results into something like ElasticSearch.


salt '*' ps.cpu_percent

salt '*' ps.disk_partition_usage
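
For instance, once a returner is configured (each returner needs its own connection settings on the minions, which I’m not covering here), the CLI flag looks like this:

salt '*' ps.cpu_percent --return elasticsearch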


This is great and all, but alerting is sort of what you really really want in order to be proactive. I’m not sure how to do that *easily*. But maybe alerts are never really all that easy to set up and manage… bleh





Friday, July 4, 2014

Salt 2014.1.5 in Ubuntu installed via apt-get doesn’t seem to install /etc/salt/minion

Yeah, it’s odd. I don’t see any minion config file at /etc/salt/minion


Must be an install bug… that or I’m totally missing something. But that’s where the config file is supposed to be.


I just type “apt-get install salt-minion”. No file at /etc/salt/minion





Tuesday, July 1, 2014

Salt 2014.1.5 released on yum today… well, I just noticed it today and it wasn’t there a few days ago

I’m excited about the push_dir feature that I think could be used to do filesystem backups :)


It pushes files to a cache directory on the salt master. Then it should just be a case of rsyncing those directories to a long-term storage server. I’m going to see if I can change the target directory used with push_dir.
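
A sketch of how I expect the push to work (the directory is a made-up example, and the master has to opt in to receiving files before any push succeeds):

# on the master, /etc/salt/master needs:
#   file_recv: True

salt '*' cp.push_dir /etc/myapp

The pushed files should then land in the master’s minion cache directory, which is what I’d point rsync at.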