Showing posts with label c.
Monday, August 23, 2010
a contribution
It seems the little patch I sent to the btpd project more than a year ago has finally been accepted: commit, original message.
Thursday, April 22, 2010
Evolution of uguu scanning process
Scanning is an essential part of a search engine: it is how the database gets filled and updated, and you can't search anything if nothing has been scanned. In this post I'd like to cover how scanning in uguu is implemented and how it has evolved over time.
From the very start it was decided that scanning should be split between two programs. The first one connects to a share, somehow constructs a list of the share's contents and prints it to standard output. The second one is a protocol-independent script which reads the contents from standard input and executes a sequence of SQL commands to insert them into the database. This sounded very reasonable at the time, since I had already written smbscan in C and I didn't want to deal with SQL in C; Python appeared to be easier for that task. Also, simple text output helps a lot with testing and debugging.
smbscan itself also consisted of a protocol-dependent part (connecting to a share, going up, going into a subdirectory, readdir) and an independent one (constructing the file tree). When Radist (the other uguu developer) wanted to port his FTP scanner to uguu, I asked him to use the tree constructor from smbscan so we could change the scanner output format more easily if necessary. Later this common part evolved into libuguu.
Now it's time to turn back to the scanning process. At first we thought that when the time to scan a share comes, all contents of the share should be dropped from the database and the share rescanned by the protocol-dependent scanner, with its output read by the database updating script. If some error occurs, the SQL transaction is rolled back and the scan is marked as failed; if the scanner reports success, the changes are committed to the database. The whole process runs inside a single SQL transaction, so there is no need to care about what our users see in the middle of a scan: they see the old version until the new one is committed.
When we first launched uguu it turned out scanning was very slow, so some optimizations were required. The first idea that came to my mind was not to update the database at all if the contents of a share were unchanged. I thought of running diff on scanner outputs saved to files, but Radist suggested comparing only secure hash digests. We started to keep a SHA1 digest of the scanner output in the database and checked against it before each update. Since many shares don't change their contents often, uguu gained a big speedup here. A minimal sketch of the check is shown below.
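Just to illustrate the mechanism (this is not uguu code; in uguu the stored digest lives in the database and the comparison is done by the Python updater), a tiny standalone C check could look like this. It hashes whatever the scanner printed and compares the result with a digest passed on the command line:

/* build with: gcc -o sha1check sha1check.c -lcrypto */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	SHA_CTX ctx;
	unsigned char md[SHA_DIGEST_LENGTH];
	char hex[2 * SHA_DIGEST_LENGTH + 1];
	char buf[4096];
	size_t n;
	int i;

	/* Digest the whole scanner output arriving on stdin. */
	SHA1_Init(&ctx);
	while ((n = fread(buf, 1, sizeof(buf), stdin)) > 0)
		SHA1_Update(&ctx, buf, n);
	SHA1_Final(md, &ctx);

	for (i = 0; i < SHA_DIGEST_LENGTH; i++)
		sprintf(hex + 2 * i, "%02x", md[i]);
	printf("%s\n", hex);

	/* Exit 0 ("unchanged, skip the update") if the digest matches the
	 * one remembered from the previous scan. */
	return (argc > 1 && strcmp(hex, argv[1]) == 0) ? 0 : 1;
}

A wrapper could then run something like "scanner | ./sha1check $OLD_DIGEST" and skip the database update on exit code 0 (the scanner invocation here is, of course, hypothetical).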
By that time I was thinking: "If only it were possible to diff the old and new scanner outputs and commit only the actual changes...". But scanning speed wasn't the biggest problem anymore; improving search time came next. At the beginning of April a database schema change was committed, and scanning became the major issue again.
The next long-awaited feature was automatic recursion detection in the tree constructor. The source of the problem is that filesystem links are not always detectable by the client. Suppose you have a directory A and you create a symbolic link B in it pointing to A. Then an SMB client sees B as a plain directory which has a subdirectory B, and so on infinitely. If A has a huge subtree, that subtree is copied each time the SMB client enters B, resulting in huge scanner output and, of course, a big database workload. The solution was to calculate an MD5 digest for each directory and compare it against its ancestors'. If a match occurs, the directory's parent's MD5 is compared to that ancestor's parent's MD5, and so on up the tree. Recursion is declared if, along the path to the root, we find a chain of directories whose MD5s are the same as those of the same chain shifted to the end of the path (the current directory), and the overall number of items in that chain's directories is bigger than a certain threshold. The latter condition scales the chain size: from 1 if the linked directory is large, up to the threshold if there is a directory with only one child, the link itself. A rough sketch of such a check is given below.
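This is not the actual libuguu code: the node layout, the names and the threshold value are invented for illustration, and each directory node is assumed to carry the MD5 of its listing, its item count and a pointer to its parent.

/* Rough illustration of the recursion check described above. */
#include <string.h>

#define RECURSION_THRESHOLD 8	/* hypothetical value */

struct dir_node {
	unsigned char md5[16];		/* digest of the directory listing */
	unsigned long items;		/* number of entries in the directory */
	struct dir_node *parent;	/* NULL for the share root */
};

/* Return 1 if entering `cur` looks like walking into a recursive link. */
int looks_recursive(const struct dir_node *cur)
{
	const struct dir_node *anc, *a, *c;
	unsigned long items = 0;

	/* Find an ancestor whose listing digest equals the current one. */
	for (anc = cur->parent; anc != NULL; anc = anc->parent)
		if (memcmp(anc->md5, cur->md5, 16) == 0)
			break;
	if (anc == NULL)
		return 0;

	/* Compare the chain ending at `cur` with the chain ending at `anc`,
	 * link by link, until enough items have been seen to trust the match. */
	for (a = anc, c = cur; a != NULL && c != NULL;
	     a = a->parent, c = c->parent) {
		if (memcmp(a->md5, c->md5, 16) != 0)
			return 0;	/* chains diverge: not a recursion */
		items += c->items;
		if (items > RECURSION_THRESHOLD)
			return 1;	/* long enough matching chain */
	}
	return 0;			/* hit the root before the threshold */
}

The walker would call looks_recursive() before descending into a freshly read directory and refuse to enter it when the function returns 1.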
Recursion detection is a very useful feature if there are shares with recursive links. But there are very few such shares in our environment.
The next idea was quite simple to implement: if two shares have the same SHA1 digest, keep only one copy of their contents in the database. This was supposed to reduce the number of files in the database for hosts exposing the same contents via both SMB and FTP. In reality it wasn't very helpful, since some changes can occur between the SMB and the FTP scan and, more frustratingly, a filename that is not well-formed may be shown differently via SMB and FTP. The latter point kills the whole idea.
So only the last hope remained: constructing a diff against the old contents. Compared to the previous work this was a big task with a huge amount of new code in libuguu, but it is finally done and is currently being tested on our installation.
Tree diffing is very straightforward: first the whole old tree is reconstructed from a file; then, while the new tree is being built by the protocol-dependent walker, the tree constructor compares the new contents against the old ones. If a new directory is added (respectively, an old one is deleted), the tree constructor prints the whole subtree with the prefix '+' (respectively '-'). The same goes for files. If only the size or the number of items in a directory (or just the size, in the case of a file) has changed, then only that directory (file) is printed with the '*' modifier. Strictly speaking, the output is a bit more complex, to help further processing by the database updating script; a purely illustrative example is shown below.
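Purely as an illustration (the paths are invented and, as said above, the real output carries extra data for the updater), a diff where one directory was removed, another was added and one file changed size might look like:

- music/old-album
- music/old-album/track01.mp3
+ video/new-series
+ video/new-series/episode01.avi
* images/backup.iso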
The dullness of the algorithm shows when just a directory's name has changed and that directory has a huge subtree: the diff then contains two whole trees (one with the '-' prefix and the other with '+'). However, in uguu we keep a tsvector of the entire path for each file to allow searching in full paths, so we would have to update the entire subtree anyway.
Now uguu has scanners for FTP, SMB and WebDAV (the latter was written last weekend and is still quite unreliable). All of them use the tree constructor from libuguu and thus have all the features described above.
PS. Thanks Radist for pointing out some errors.
Monday, January 11, 2010
uguu - search engine for local networks
I've spent the New Year holidays designing and writing (together with another developer) a local search engine that allows quick search and browsing through SMB and FTP shares. For years we were using http://sourceforge.net/projects/fsmbsearch, but I like neither its design, which barely relies on a relational database (I may be wrong here and it would be interesting to find out), nor its implementation, where installation requires dealing with autohell, patching some outdated version of samba and fetching loads of Perl modules.
The new search engine, called uguu, is entirely open source and is available at http://code.google.com/p/uguu. It uses libsmbclient and a small amount of C code for scanning SMB shares, PostgreSQL for the database, Python for scripts and Django as the web framework.
There is still a long way to go (for example, there is no search page yet) and there are some things I plan to do myself, but if you want to contribute in any way, feel free to join the Google group devoted to this project: http://groups.google.com/group/uguu
Saturday, October 31, 2009
Spy mode inotify
Although the Linux inotify(7) user-space interface allows catching events on file operations, it doesn't provide a way to answer a simple question: who the hell is accessing my files? By 'who' I mean which process, of course. So if we want an answer to that question, we have to go into kernel space. (Constantly running 'lsof | grep' is out of the question here.)
A little thinking gives us hope that this can be done with quite a simple kernel module. Indeed, the inotify handler must be called at the very moment a file is accessed, meaning the handler runs in the context of the process which is accessing the file. So our inotify handler can just print the desired fields of the 'current' task to dmesg. This should work until some sophisticated buffering is introduced between file operations and inotify handlers.
The kernel provides an easy way to use a self-written inotify handler in a kernel module. See Documentation/filesystems/inotify.txt for details.
The code below just demonstrates that the described approach works. To try it, do
insmod spy_inotify.ko path=file_to_spy_on
Then access that file and watch dmesg for messages like the following
[spy_inotify] Catched inotify event for file ../../cpufreq.c. mask=32, pid=8183 executable="less"
spy_inotify.c
/* spy_inotify - linux kernel module for spying on inotify events
 * prints to dmesg _who_ exactly is doing something with a file
 *
 * Copyright (C) 2009 Ruslan Savchenko
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
 */

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/namei.h>
#include <linux/inotify.h>
#include <linux/sched.h>
#include <linux/string.h>

#define SPY_INOTIFY "[spy_inotify] "
#define MASK (IN_ACCESS | IN_ATTRIB | IN_CLOSE_WRITE | IN_CLOSE_NOWRITE \
	| IN_CREATE | IN_DELETE | IN_DELETE_SELF | IN_MODIFY \
	| IN_MOVE_SELF | IN_MOVED_FROM | IN_MOVED_TO | IN_OPEN )

static char *path = "/";
module_param(path, charp, 0);
MODULE_PARM_DESC(path, "Path to spy at");

static char comm[TASK_COMM_LEN];

void sinotify_event(struct inotify_watch *watch, u32 wd, u32 mask,
		u32 cookie, const char *name, struct inode *inode);
void sinotify_destroy(struct inotify_watch *watch);

static struct inotify_operations si_op = {
	sinotify_event,
	sinotify_destroy
};

void sinotify_event(struct inotify_watch *watch, u32 wd, u32 mask,
		u32 cookie, const char *name, struct inode *inode)
{
	task_lock(current);
	strncpy(comm, current->comm, sizeof(current->comm));
	task_unlock(current);
	printk(KERN_INFO SPY_INOTIFY
		"Catched inotify event for file %s. mask=%d, pid=%d executable=\"%s\"\n",
		path, mask, current->tgid, comm);
}

static struct inotify_handle *ih = 0;
static struct inotify_watch watch;

void sinotify_destroy(struct inotify_watch *watch)
{
}

int init_sinotify(void)
{
	struct path s_path;
	int err;

	printk(KERN_INFO SPY_INOTIFY "Spying at %s\n", path);

	if ((err = kern_path(path, LOOKUP_FOLLOW, &s_path)) != 0) {
		printk(KERN_ERR SPY_INOTIFY "kern_path() returned %d\n", err);
		goto out;
	}

	/* FIXME: crappy nullcheck and no actions on error. Odd */
	if ((s_path.dentry == NULL) || (s_path.dentry->d_inode == NULL)) {
		printk(KERN_ERR SPY_INOTIFY "NULL dentry or inode\n");
		err = -ENOENT;
		goto out;
	}

	ih = inotify_init(&si_op);
	if (IS_ERR(ih)) {
		printk(KERN_ERR SPY_INOTIFY "inotify_init() returned bad ptr\n");
		err = PTR_ERR(ih);
		goto out;
	}

	inotify_init_watch(&watch);
	err = inotify_add_watch(ih, &watch, s_path.dentry->d_inode, MASK);
	if (err < 0) {
		printk(KERN_ERR SPY_INOTIFY
			"inotify_add_watch() returned bad descriptor: %d\n", err);
		goto clean_ih;
	}

	return 0;

clean_ih:
	inotify_destroy(ih);
out:
	return err;
}

void exit_sinotify(void)
{
	inotify_rm_watch(ih, &watch);
	inotify_destroy(ih);
	printk(KERN_INFO SPY_INOTIFY "End spying\n");
}

module_init(init_sinotify);
module_exit(exit_sinotify);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ruslan Savchenko");
MODULE_DESCRIPTION("Spy-mode inotify");

Makefile
obj-m += spy_inotify.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
Saturday, August 8, 2009
LANG vs LC_MESSAGES in man(1) error messages
First, I'd like to say that I'm pretty happy with English being my system's language. But since I sometimes have to use Russian, I've set LANG=ru_RU.UTF-8. Unfortunately man(1) error messages are encoded in KOI8-R, so they look like garbage in my locale. I don't particularly need them in Russian, so I've set LC_MESSAGES=C. The man page locale(1P) says:
LC_MESSAGES
Determine the locale that should be used to affect the format
and contents of diagnostic messages written to standard error.
But it doesn't work for man(1). After a little investigation I learned that man(1) uses catgets(3) to get an error string. The catalog descriptor for catgets(3) is obtained from catopen(3), which (in the case of $LC_MESSAGES=C) carefully tries to open /usr/share/locale/C and some other directories for the C locale and finally falls back to /usr/share/locale/ru. If man(1) obtains an empty string from catgets(3), it looks the message up in a builtin table, which is exactly what I need.
For now I can't say which part of this complicated situation is buggy, so I've inserted a hack into man(1) that takes the error string straight from the builtin table if the locale is C or POSIX.
Here is my patch.
diff -rNu a/man-1.6f/src/gripes.c b/man-1.6f/src/gripes.c
--- a/man-1.6f/src/gripes.c 2006-11-21 22:53:44.000000000 +0300
+++ b/man-1.6f/src/gripes.c 2009-08-11 10:36:08.000000000 +0400
@@ -99,15 +99,22 @@
static char *
getmsg (int n) {
char *s = "";
-
- catinit ();
- if (catfd != (nl_catd) -1) {
- s = catgets(catfd, 1, n, "");
- if (*s && is_suspect(s))
- s = "";
- }
- if (*s == 0 && 0 < n && n <= MAXMSG)
- s = msg[n];
+ char *lm;
+
+ lm = getenv("LC_MESSAGES");
+ if (lm && (!strcmp(lm, "C") || !strcmp(lm, "POSIX"))) {
+ if (0 < n && n <= MAXMSG)
+ s = msg[n];
+ } else {
+ catinit ();
+ if (catfd != (nl_catd) -1) {
+ s = catgets(catfd, 1, n, "");
+ if (*s && is_suspect(s))
+ s = "";
+ }
+ if (*s == 0 && 0 < n && n <= MAXMSG)
+ s = msg[n];
+ }
if (*s == 0) {
fprintf(stderr,
"man: internal error - cannot find message %d\n", n);
Monday, February 23, 2009
mplayer screenshot name
About a year ago a friend of mine complained about the filename style mplayer uses for screenshots. He said it should contain the original file name and a timestamp (so one could easily find and rewatch the captured scene). I hacked mplayer, but unfortunately the patch wasn't accepted into mainline.
Here is the patch anyway:
Index: libmpcodecs/vf_screenshot.c
===================================================================
--- libmpcodecs/vf_screenshot.c (revision 28346)
+++ libmpcodecs/vf_screenshot.c (working copy)
@@ -22,9 +22,19 @@
#include "libswscale/swscale.h"
#include "libavcodec/avcodec.h"
+#include "m_option.h"
+#include "m_struct.h"
+
+#include "mplayer.h"
+
+#define SHOT_FNAME_LENGTH 102
+#define TIMESTAMP_LENGTH 20
+
struct vf_priv_s {
int frameno;
- char fname[102];
+ char *fname;
+ char *basename;
+ int style;
/// shot stores current screenshot mode:
/// 0: don't take screenshots
/// 1: take single screenshot, reset to 0 afterwards
@@ -36,7 +46,7 @@
AVCodecContext *avctx;
uint8_t *outbuffer;
int outbuffer_size;
-};
+} vf_priv_dflt;
//===========================================================================//
@@ -92,14 +102,65 @@
else return 0;
}
-static void gen_fname(struct vf_priv_s* priv)
+static void gen_fname(struct vf_priv_s* priv, double pts)
{
- do {
- snprintf (priv->fname, 100, "shot%04d.png", ++priv->frameno);
- } while (fexists(priv->fname) && priv->frameno < 100000);
- if (fexists(priv->fname)) {
- priv->fname[0] = '\0';
- return;
+ char *base = "shot";
+ char tstamp[TIMESTAMP_LENGTH];
+ char ts_sep;
+ uint32_t hour = (uint32_t) (pts/3600);
+ uint32_t min = (uint32_t) (pts/60) % 60;
+ uint32_t sec = (uint32_t) pts % 60;
+ uint32_t msec = (uint32_t) (pts*100) % 100;
+ size_t n;
+
+ switch (priv->style) {
+ case 1:
+ case 2:
+ ts_sep = (priv->style == 1) ? ':' : '-';
+ /* TIMESTAMP_LENGTH is enough for 2^32 seconds */
+ snprintf(tstamp, TIMESTAMP_LENGTH,
+ "%02" PRIu32 "%c%02" PRIu32 "%c%02" PRIu32 ".%02" PRIu32,
+ hour, ts_sep, min, ts_sep, sec, msec);
+
+ if (priv->basename)
+ base = priv->basename;
+ else {
+ if (strstr(filename, "://"))
+ base = "shot";
+ else {
+ base = strrchr(filename, '/');
+ if (base == NULL)
+ base = filename;
+ else
+ base++;
+ }
+ }
+
+ /* 8 is length of ".[].png" plus '\0' */
+ n = strlen(base) + strlen(tstamp) + 8;
+ priv->fname = malloc(n);
+ if (!priv->fname) {
+ mp_msg(MSGT_VFILTER,MSGL_ERR,"Unable to allocate memory in vf_screenshot.c\n");
+ return;
+ }
+ snprintf(priv->fname, n, "%s.[%s].png", base, tstamp);
+ break;
+ default:
+ case 0:
+ priv->fname = malloc(SHOT_FNAME_LENGTH);
+ if (!priv->fname) {
+ mp_msg(MSGT_VFILTER,MSGL_ERR,"Unable to allocate memory in vf_screenshot.c\n");
+ return;
+ }
+ do {
+ snprintf (priv->fname, SHOT_FNAME_LENGTH, "shot%04d.png", ++priv->frameno);
+ } while (fexists(priv->fname) && priv->frameno < 100000);
+ if (fexists(priv->fname)) {
+ free(priv->fname);
+ priv->fname = NULL;
+ return;
+ }
+ break;
}
mp_msg(MSGT_VFILTER,MSGL_INFO,"*** screenshot '%s' ***\n",priv->fname);
@@ -196,11 +257,13 @@
if(vf->priv->shot) {
if (vf->priv->shot==1)
vf->priv->shot=0;
- gen_fname(vf->priv);
- if (vf->priv->fname[0]) {
+ gen_fname(vf->priv, pts);
+ if (vf->priv->fname) {
if (!vf->priv->store_slices)
scale_image(vf->priv, dmpi);
write_png(vf->priv);
+ free(vf->priv->fname);
+ vf->priv->fname = NULL;
}
vf->priv->store_slices = 0;
}
@@ -263,6 +326,8 @@
av_freep(&vf->priv->avctx);
if(vf->priv->ctx) sws_freeContext(vf->priv->ctx);
if (vf->priv->buffer) free(vf->priv->buffer);
+ if (vf->priv->fname) free(vf->priv->fname);
+ if (vf->priv->basename) free(vf->priv->basename);
free(vf->priv->outbuffer);
free(vf->priv);
}
@@ -278,7 +343,7 @@
vf->draw_slice=draw_slice;
vf->get_image=get_image;
vf->uninit=uninit;
- vf->priv=malloc(sizeof(struct vf_priv_s));
+ if (!vf->priv) vf->priv = calloc(1, sizeof(struct vf_priv_s));
vf->priv->frameno=0;
vf->priv->shot=0;
vf->priv->store_slices=0;
@@ -294,14 +359,27 @@
return 1;
}
+#define ST_OFF(f) M_ST_OFF(struct vf_priv_s,f)
+static const m_option_t vf_opts_fields[] = {
+ {"style", ST_OFF(style), CONF_TYPE_INT, 0, 0, 2, NULL},
+ {"basename", ST_OFF(basename), CONF_TYPE_STRING, 0, M_OPT_MIN, M_OPT_MAX, NULL},
+ {NULL, NULL, 0, 0, 0, 0, NULL}
+};
+static const m_struct_t vf_opts = {
+ "screenshot",
+ sizeof(struct vf_priv_s),
+ &vf_priv_dflt,
+ vf_opts_fields
+};
+
const vf_info_t vf_info_screenshot = {
"screenshot to file",
"screenshot",
"A'rpi, Jindrich Makovicka",
"",
screenshot_open,
- NULL
+ &vf_opts
};
//===========================================================================//
Index: DOCS/man/en/mplayer.1
===================================================================
--- DOCS/man/en/mplayer.1 (revision 28346)
+++ DOCS/man/en/mplayer.1 (working copy)
@@ -7180,15 +7180,28 @@
.RE
.
.TP
-.B screenshot
+.B screenshot[=style:basename]
Allows acquiring screenshots of the movie using slave mode
commands that can be bound to keypresses.
See the slave mode documentation and the INTERACTIVE CONTROL
section for details.
-Files named 'shotNNNN.png' will be saved in the working directory,
-using the first available number \- no files will be overwritten.
The filter has no overhead when not used and accepts an arbitrary
colorspace, so it is safe to add it to the configuration file.
+.RSs
+.IPs <style>
+0: Files named 'shotNNNN.png' will be saved in the working directory,
+using the first available number \- no files will be overwritten.
+.br
+1: Files named 'basename.[hh:mm:ss.ms].png' will be saved in the
+working directory.
+.br
+2: Files named 'basename.[hh-mm-ss.ms].png' will be saved in the
+working directory. Use this if your environment doesn't allow colon
+in filenames.
+.IPs <basename>
+Basename for a screenshot file. If not set and mplayer is playing a file,
+the file name will be used. If not set and mplayer is playing a stream,
+"shot" will be used.
.RE
.
.TP
Index: mencoder.c
===================================================================
--- mencoder.c (revision 28346)
+++ mencoder.c (working copy)
@@ -115,6 +115,7 @@
char* audio_lang=NULL;
char* dvdsub_lang=NULL;
static char* spudec_ifo=NULL;
+char* filename=NULL;
static char** audio_codec_list=NULL; // override audio codec
static char** video_codec_list=NULL; // override video codec
@@ -397,7 +398,6 @@
double v_timer_corr=0;
m_entry_t* filelist = NULL;
-char* filename=NULL;
int decoded_frameno=0;
int next_frameno=-1;
Tuesday, November 18, 2008
pam_setquota, pam_kill
Being a network administrator at an MSU dorm two years ago, I set up a public ssh server. Users were stored in a MySQL database on another server, so I used the already existing pam_mysql and libnss_mysql for my public ssh server. I also wanted to set a disk quota automatically for each user, but Linux setquota(8) doesn't let you edit the quota of a non-existent user. Nor did any of the pam_setquota modules I found work. So I wrote one myself; a sketch of the idea is given below.
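Purely to illustrate the approach (this is not the actual pam-setquota code; the device path and the limits are hard-coded and all names are invented), a minimal PAM session module that forces a block quota for the logging-in user could look roughly like this:

#define PAM_SM_SESSION
#include <security/pam_modules.h>
#include <sys/types.h>
#include <sys/quota.h>
#include <pwd.h>
#include <string.h>

PAM_EXTERN int pam_sm_open_session(pam_handle_t *pamh, int flags,
                                   int argc, const char **argv)
{
	const char *user;
	struct passwd *pw;
	struct dqblk dq;

	if (pam_get_user(pamh, &user, NULL) != PAM_SUCCESS)
		return PAM_SESSION_ERR;
	pw = getpwnam(user);		/* resolved via NSS, e.g. libnss_mysql */
	if (pw == NULL)
		return PAM_SESSION_ERR;

	memset(&dq, 0, sizeof(dq));
	dq.dqb_bsoftlimit = 900000;	/* illustrative limits */
	dq.dqb_bhardlimit = 1000000;
	dq.dqb_valid = QIF_BLIMITS;

	if (quotactl(QCMD(Q_SETQUOTA, USRQUOTA), "/dev/sda1",
	             pw->pw_uid, (caddr_t)&dq) != 0)
		return PAM_SESSION_ERR;
	return PAM_SUCCESS;
}

PAM_EXTERN int pam_sm_close_session(pam_handle_t *pamh, int flags,
                                    int argc, const char **argv)
{
	return PAM_SUCCESS;
}

Built as a shared object, such a module would be enabled with a 'session' line in /etc/pam.d/sshd; the real project, of course, takes its limits and device from configuration rather than hard-coding them.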
Of course, I edited limits.conf. But it didn't protect against stupid CPU-intensive "while(1);" programs that some nasty users left running. I decided to kill all of a user's processes once he or she is no longer logged on to the system, and wrote pam_kill for that.
Today, to share my old code, I created two Google Code projects: pam-setquota and pam-kill. You can access them via my Google Code profile.
Saturday, November 15, 2008
crc32
A long time has passed since I last played with crc32. I wanted to study and benchmark zlib's implementation as well (it's quite complex compared to the basic ones), but it seems it will be stuck in my todo list for ages. So I decided to write down what I know right now.
This is a very simple implementation taken from rhash (rhash.sourceforge.net). Every article about crc32 describes the code below; there is nothing rhash-specific about it.
unsigned get_crc32(unsigned crcinit, const char *p, int len) {
register unsigned crc;
const char *e = p + len;
for(crc=crcinit^0xFFFFFFFF; p<e; p++)
crc = crcTable[(crc^ *p) & 0xFF] ^ (crc >> 8);
return( crc^0xFFFFFFFF );
}
And this is the x86 asm code produced by gcc 4.1.2 from the 'for' loop:
.L11:
movsbl (%ecx,%esi),%eax
incl %ecx
xorl %edx, %eax
andl $255, %eax
shrl $8, %edx
xorl crcTable(,%eax,4), %edx
cmpl %ebx, %ecx
jne .L11
One of my friends found a slightly faster implementation written in inline asm and used it for his hash checker ArXSum. I won't show his code here, because my optimization of the rhash code is even faster. ArXSum just gave me the idea to use 8-bit registers to get rid of the andl instruction.
First, I forced gcc to use an 8-bit register. I hoped that would be enough, but it wasn't.
unsigned get_crc32(unsigned crcinit, const char *p, int len) {
register unsigned crc;
unsigned char m;
const char *e = p + len;
m = 0;
for(crc=crcinit^0xFFFFFFFF; p<e; p++) {
m = (crc^ *p);
crc = crcTable[m] ^ (crc >> 8);
}
return( crc^0xFFFFFFFF );
}
The code produced is
.L11:
movzbl (%ecx,%esi), %eax
incl %ecx
xorb %dl, %al
movzbl %al, %eax
shrl $8, %edx
xorl crcTable(,%eax,4), %edx
cmpl %ebx, %ecx
jne .L11
Do you see that utterly useless second movzbl? My final optimization was simply to remove it and to add "xorl %eax,%eax" before the loop (that would be the "m = 0" which gcc had dropped). The newest version of gcc produces the same code as well.
I still want to carefully look into zlib one day and to compare their high-level optimization with mine. I will eventually post about it.
Monday, October 29, 2007
gcc optimizer
I found that the following code causes a segfault when compiled with -O2. Compiler:
gcc (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
#include <stdlib.h>
struct prime {
struct prime *next;
};
int main(){
struct prime primes = {.next=NULL};
struct prime *p = &primes;
while (p->next != NULL){
p = p->next;
}
p->next = (struct prime*) malloc (sizeof(struct prime));
return 0;
}