From: Ihor Radchenko <yantar92@gmail.com>
To: Maxim Nikulin <manikulin@gmail.com>, emacs-orgmode@gnu.org
Subject: Re: Yet another browser extension for capturing notes - LinkRemark
Date: Sat, 26 Dec 2020 21:49:41 +0800
Message-ID: <87sg7spthm.fsf@localhost>
In-Reply-To: <rs7800$838$1@ciao.gmane.io>
Maxim Nikulin <manikulin@gmail.com> writes:
> I just inspected pages on several sites using developer tools and
> added code that handles the elements I noticed.
I see. I basically did the same, apart from adding some minimal support
for OpenGraph (though I stopped when I saw that even YouTube does not
follow the standard beyond the most basic fields).
> The only force to add some formal data is "share" buttons. Maybe some
> guides for web developers from social networks or search engines could
> be more useful than formal references, but I have not had a closer
> look.
That is also consistent with what I saw. <meta .. twitter:..> fields
seem to be very common.
>> Also, org-capture-ref does not really force the user to put BiBTeX into
>> the capture. Individual metadata fields are available using
>> org-capture-ref-get-bibtex-field (which extracts data from internal
>> alist structure). It's just that I mostly had BiBTeX in mind (with
>> distant goal of supporting export to LaTeX) for my use-cases.
>
> I do not have a clear vision of how to use the collected data for
> queries. Certainly I want a more human-friendly representation than
> BibTeX entries (maybe in addition to machine-parsable data) adjacent
> to my notes.
So far, I have found the author, website name, publication year, title,
and resource type useful. My standard capture template for links is:
* <Author> [<Website>] (<Year>) Title
Example:
* dash-docs-el [Github] Dash-Docs-El Helm-Dash: Browse Dash Docsets Inside Emacs
Such headlines can be easily searched later, especially when I also add
some #keywords manually.
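For reference, the Elisp side of such a template could look roughly
like this (a minimal, untested sketch; the template key, target file,
and the exact BiBTeX field names are guesses and may differ from what
org-capture-ref actually uses):

  ;; Sketch of a capture template assembled from individual BiBTeX
  ;; fields.  The field names (:author, :howpublished, :year, :title)
  ;; are assumptions; %(...) is evaluated by org-capture when the
  ;; template is filled in.
  (setq org-capture-templates
        `(("L" "Link" entry (file "~/org/links.org")
           ,(concat "* %(org-capture-ref-get-bibtex-field :author)"
                    " [%(org-capture-ref-get-bibtex-field :howpublished)]"
                    " (%(org-capture-ref-get-bibtex-field :year))"
                    " %(org-capture-ref-get-bibtex-field :title)"))))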
> Personally, I would prefer to avoid HTTP queries from Emacs.
> Sometimes it is better to have the current DOM state, not the page
> source; that is why I decided to gather data inside the browser,
> despite security fences that are placed quite strangely in some
> cases.
Completely agree here. That's why I directly reuse the current DOM
state from qutebrowser in my own setup. However, an extension for
qutebrowser was easy for me to write, as it can simply be a bash
script. I know nothing about Firefox/Chrome extensions and I do not
know JavaScript.
On the other hand, having the ability to fetch HTML is still useful in
my case (an Emacs package) when the capture is not done from the
browser. For example, I often capture links from elfeed, and an HTTP
query from Emacs is useful then.
> From my point of view, you should be happy with any of the projects
> you mentioned below. Do all of them have problems that are critical
> for you?
They are all JavaScript, except one (unicontent), which can be easily
replaced with built-in Elisp libraries (dom.el).
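To illustrate, a minimal sketch of the dom.el approach (the helper name
is mine; it assumes an Emacs built with libxml support):

  (require 'dom)
  (require 'url)

  ;; Hypothetical helper: fetch URL and return an alist of
  ;; (NAME-OR-PROPERTY . CONTENT) pairs for its <meta> tags.
  (defun my/url-meta-alist (url)
    (with-current-buffer (url-retrieve-synchronously url)
      (goto-char (point-min))
      (re-search-forward "\n\n" nil t)  ; skip the HTTP headers
      (prog1
          (delq nil
                (mapcar
                 (lambda (meta)
                   (let ((name (or (dom-attr meta 'property)
                                   (dom-attr meta 'name)))
                         (content (dom-attr meta 'content)))
                     (when (and name content) (cons name content))))
                 (dom-by-tag
                  (libxml-parse-html-region (point) (point-max))
                  'meta)))
        (kill-buffer))))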
>> Finally, would you be interested to join efforts on metadata parsing?
>
> Could you please share a bit more detail about your ideas?
> Technically, it should be possible to push e.g. the raw
> document.head.innerHTML to any external metadata parser using native
> messaging (to deal with sites requiring authorization). However, it
> could raise an alarm during review before publication of the
> extension in the browser catalogues.
That's unfortunate. Pushing the raw HTML/DOM is what I had in mind when
talking about joining efforts.
Another idea would be to provide a callback from Elisp to the browser
(I am not sure whether that is possible). org-capture-ref has a
mechanism to check whether a link was captured in the past. If the link
is already captured, information about the link location and TODO state
can be messaged back to the browser.
Example message (only qutebrowser is supported now):
Bookmark not saved!
Already captured into org-capture-ref:TODO maxnikulin [Github] linkremark: LinkRemark - page or link notes with context
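The Elisp side of such a callback could be as simple as this sketch
(the helper name is mine; it relies on qutebrowser forwarding ":"
commands to the already running instance over its IPC socket):

  ;; Sketch: pop up a message in the running qutebrowser instance.
  (defun my/qutebrowser-message (text)
    (call-process "qutebrowser" nil 0 nil
                  (concat ":message-warning " text)))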
> There is some room for improvement, but I do not think that the
> quality of metadata for ordinary sites could be dramatically better.
> The case that is not handled at all is scientific publications;
> unfortunately, I currently have little interest in it. Definitely the
> results should be stored in some structured format such as BibTeX. I
> have seen huge <head> elements describing even all the references.
> Certainly such lists are not for general-purpose notes (at least
> without an explicit request from the user); they should be handled by
> some bibliography software to display citation graphs in the local
> library. On the other hand, it is not a problem to feed such data to
> some tool using the native messaging protocol. I have no idea whether
> various publishers provide such data in a uniform way; I just hope
> that pressure from citation indices and bibliography management
> software has a positive influence on standardization.
I think https://github.com/microlinkhq/metascraper#core-rules can be
used for ideas. It has generic parsing in addition to site-specific
rules.
For scientific publications, the key point is usually getting a
DOI/ISBN. Then most of the metadata can be obtained using the standard
doi.org API or various ISBN databases. In addition, reference data is
generally available from OpenCitations.net (they also have all kinds of
web APIs).
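For example, doi.org supports HTTP content negotiation, so a ready-made
BiBTeX entry can be fetched directly (a sketch; the helper name is
mine):

  ;; Sketch: ask doi.org for BibTeX via the Accept header.
  (defun my/doi-to-bibtex (doi)
    (let ((url-request-extra-headers
           '(("Accept" . "application/x-bibtex"))))
      (with-current-buffer
          (url-retrieve-synchronously (concat "https://doi.org/" doi))
        (goto-char (point-min))
        (re-search-forward "\n\n" nil t)  ; skip the HTTP headers
        (prog1 (buffer-substring (point) (point-max))
          (kill-buffer)))))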
Also, do you pass any of the parsed metadata to org-protocol? If you
do, it would be trivial to get it into capture templates on the Elisp
(and org-capture-ref) side.
> I am not going to bloat the code with recipes for particular sites.
> However, I realize that some special cases should still be handled. I
> am not ready to adopt the user script model used by
> Greasemonkey/Violentmonkey/Tampermonkey. I believe it is better to
> create dedicated extension(s) that either add and overwrite existing
> meta elements or allow querying the gathered data through the
> sendMessage WebExtensions interface. By the way, scripts for the
> above-mentioned extensions could be used as well. That should
> alleviate cases where some site with insane metadata is important to
> a particular user.
I see. This is another point where I thought it could be worth
collaborating. The parser rules just need to be written once (probably
in some common format, like JSON) and can then be reused.
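Something along these lines, purely hypothetical (the rule format and
all field names are invented for illustration):

  {
    "domain": "example.com",
    "fields": {
      "title":  {"selector": "meta[property='og:title']", "attr": "content"},
      "author": {"selector": "meta[name='author']", "attr": "content"}
    }
  }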
> For some reason I did not even try to find existing projects for
> metadata extraction. Maybe I still hope that a fairly simple
> implementation can handle most of the cases.
Actually, simple parsing does a fairly good job on most websites. It is
just not ideal. For example, I tweaked the titles of captured GitHub
issues to include "issue#", which helps to distinguish such pages from
individual repo bookmarks. I believe that such adjustments should be
available to users, which is where the org-capture-ref code started.
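To give an idea, such a tweak could look like this hypothetical
illustration (not the actual org-capture-ref code):

  ;; Prefix titles of GitHub issue pages with "issue#N".
  (defun my/tweak-github-issue-title (title url)
    (if (string-match "github\\.com/[^/]+/[^/]+/issues/\\([0-9]+\\)" url)
        (format "issue#%s %s" (match-string 1 url) title)
      title))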
Best,
Ihor