Emacs (short for "Editor MACroS") has the fastest regular-expression engine in the world--at least among engines built to find and display matching strings as you type them.

So keep that in mind: for editing documents, searching them, and in some cases replacing strings, there is nothing faster than Emacs and its native regular-expression engine, which is built for editing tasks--in particular, for searching strings and regular expressions as you type them.

In many other ways other engines are of course faster; but not for editing, searching, and replacing tasks.

And even for editing multi-gigabyte or multi-terabyte files, I suggest you look into and try out vlf-mode (i.e. "View Large Files" mode), if only for the fun and excitement of it.

So, again, GNU Emacs is by far the world's most powerful editor, and it has been for many, many years--there is no need for third-party tools, though there may be a need to investigate the "engines under the hood" and why they work the way they do.

On Tue, Nov 12, 2019 at 8:04 AM Russell Adams <RLAdams@adamsinfoserv.com> wrote:
To further explain my setup, I have three libraries of files: Personal, Technical,
and Business. Personal is all personal data including Org files, Technical is
all whitepapers and vendor documentation, and Business is Org projects and other
matters. Recoll is used to search all of them.

In my shell profile I have a few functions to access each library and to file
away new documents (i.e., I downloaded a whitepaper and just want to slap it into
a unique directory in the library).

#+BEGIN_EXAMPLE
  # For recoll and library
  func _FileRecoll()  { DEST="$HOME/Library/$1/$(date +%Y/%m/%d)" ; mkdir -p "$DEST" ; mv -i "$2" "$DEST" ; }
  func FileTech()     { _FileRecoll "Technical" "$1" ; }
  func FilePersonal() { _FileRecoll "Personal"  "$1" ; }
  func FileBiz()      { _FileRecoll "Business"  "$1" ; }

  func recollt() { RECOLL_CONFDIR=~/Library/.recoll-Technical ~/scripts/recolltui.sh "$@" ; }
  func recollp() { RECOLL_CONFDIR=~/Library/.recoll-Personal  ~/scripts/recolltui.sh "$@" ; }
  func recollb() { RECOLL_CONFDIR=~/Library/.recoll-Business  ~/scripts/recolltui.sh "$@" ; }
#+END_EXAMPLE
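For illustration, here is the filing step in isolation--a minimal, self-contained sketch of what _FileRecoll does, run against a temporary directory instead of $HOME/Library so it is safe to try ("whitepaper.pdf" is a made-up sample name):

```shell
#!/bin/sh
# Sketch of the _FileRecoll filing step against a temp directory;
# "whitepaper.pdf" is a made-up stand-in for a downloaded file.
LIB=$(mktemp -d)
DEST="$LIB/Technical/$(date +%Y/%m/%d)"   # dated subdirectory, as above
mkdir -p "$DEST"
: > "$LIB/whitepaper.pdf"                 # create an empty stand-in file
mv "$LIB/whitepaper.pdf" "$DEST"
ls "$DEST"                                # whitepaper.pdf
```

Each day's downloads end up in their own YYYY/MM/DD directory, which is what makes the destination unique without any bookkeeping.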

I have a daily cronjob to index those directories:

#+BEGIN_EXAMPLE
  # Recoll
  00 2  * * * /usr/bin/recollindex -c ${HOME}/Library/.recoll-Personal/  >> "${HOME}/Library/.recoll-Personal/recollindex.log" 2>&1
  00 3  * * * /usr/bin/recollindex -c ${HOME}/Library/.recoll-Technical/ >> "${HOME}/Library/.recoll-Technical/recollindex.log" 2>&1
  00 4  * * * /usr/bin/recollindex -c ${HOME}/Library/.recoll-Business/  >> "${HOME}/Library/.recoll-Business/recollindex.log" 2>&1
#+END_EXAMPLE

Then I have a simple TUI shell script which wraps dialog around recoll's
CLI. It puts the filename in my clipboard for command-line pasting, and opens
PDFs in Firefox.

#+BEGIN_EXAMPLE
  #!/bin/sh
  # ~/scripts/recolltui.sh

  # requires recollq optional cli binary to be present from recoll package
  # uses base64, xsel, and dialog

  DB=$(mktemp)
  MENU=$(mktemp)
  trap 'rm -f -- "${DB}" "${MENU}"' INT TERM HUP EXIT

  # Make sure to customize RECOLL_CONFDIR (ie: ~/Library/.recoll-Technical) if needed

  # query recoll, save the base64 output to $DB as 3 space separated columns: row #, title, url
  recollq -e -F "title url" "$@" 2>/dev/null | nl > "$DB"

  # copy header into menu
  head -n 2 "$DB" | while read -r num rest ; do
      echo "= \"$rest\"" >> "$MENU"
  done

  # Convert results to dialog menu using row # and title + filename as list item
  # skip first two lines of results, they are not base64
  tail -n +3 "$DB" | while read -r num title url ; do
      echo "$num \"$(echo "$title" | base64 -w0 -d ) : $(basename "$(echo "$url" | base64 -w0 -d | sed 's,file://,,g')")\"" >> "$MENU"
  done

  # ask the user which results to view
  SEL=$(dialog --menu "Search results" 0 0 0 --file "$MENU" --stdout)

  # if a choice was made, open the url in firefox AND copy it to the clipboard
  [ $? -eq 0 ] && {
      URL="$(awk "\$1 == $SEL {print \$3}" "$DB" | base64 -w0 -d)"
      echo "$URL" | sed 's,file://,,g' | xsel
      firefox "$URL"
  }

#+END_EXAMPLE
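A note on the base64 plumbing: recollq's -F option hands back each requested field base64-encoded, which is why the script decodes titles and URLs before displaying them. The round trip looks roughly like this (the title and path below are made-up samples, not output from the setup above):

```shell
#!/bin/sh
# Decode a made-up sample the way the TUI script decodes recollq fields.
TITLE_B64=$(printf '%s' 'GNU Emacs Manual' | base64)
TITLE=$(printf '%s' "$TITLE_B64" | base64 -d)
echo "$TITLE"                                  # GNU Emacs Manual

# URLs come back as file:// URIs; the script strips the scheme before
# handing the path to xsel (this path is a hypothetical example):
URL='file:///home/user/Library/Technical/2019/11/12/whitepaper.pdf'
echo "$URL" | sed 's,file://,,g'
```

Encoding the fields keeps spaces and shell metacharacters in titles and paths from breaking the whitespace-separated columns that nl and awk rely on.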

I've often thought that the dialog script could be easily replaced by an Emacs
interface, but I haven't taken the time to try to write one.

I've found that recoll's indexing with Xapian is excellent. I can usually find
my search terms in technical documentation very rapidly, and its support for
many file types means it indexes nearly everything. My most frequent formats
are text (including Org), PDF, and DOC.
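On searching: recoll also has a small query language of its own, so results can be narrowed right at the prompt. A couple of hedged examples of query strings (the search terms are made up; the mime: filter and double-quoted phrases are standard recoll query syntax):

```text
xapian indexing mime:application/pdf    restrict matches to PDF files
"capture templates" org                 exact phrase plus a loose term
```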

I used to have a "Scrapbook" extension in Firefox which would instantly save the
webpage being viewed into my Personal library. Unfortunately, that isn't
supported on modern Firefox versions, so I need to find a replacement for that
functionality.

On Tue, Nov 12, 2019 at 12:34:29PM +0100, Roland Everaert wrote:
> I had a quick look at the recoll and I notice that there is a python API
> to update/create index.
>
> Maybe something could be developed using the Python package recently
> released by Karl Voit, to feed a recoll index with Org data.
>
> Roland.


------------------------------------------------------------------
Russell Adams                            RLAdams@AdamsInfoServ.com

PGP Key ID:     0x1160DCB3           http://www.adamsinfoserv.com/

Fingerprint:    1723 D8CA 4280 1EC9 557F  66E8 1154 E018 1160 DCB3