From: Ihor Radchenko <yantar92@gmail.com>
To: Max Nikulin <manikulin@gmail.com>
Cc: emacs-orgmode@gnu.org
Subject: Re: profiling latency in large org-mode buffers (under both main & org-fold feature)
Date: Wed, 02 Mar 2022 23:12:20 +0800	[thread overview]
Message-ID: <87fso0xuaz.fsf@localhost> (raw)
In-Reply-To: <svnnj8$pl3$1@ciao.gmane.io>

Max Nikulin <manikulin@gmail.com> writes:

> On 27/02/2022 13:43, Ihor Radchenko wrote:
>> 
>> Now, I did an extended profiling of what is happening using perf:
>> 
>>       6.20%   [.] buf_bytepos_to_charpos
>
> Maybe I am interpreting such results wrongly, but it does not look like 
> a bottleneck. Anyway, thank you very much for such efforts; however, it is 
> unlikely that I will join the profiling effort in the near future.

The perf data I provided is a bit tricky. I recorded statistics over the
whole Emacs session and used a fairly small number of iterations in your
benchmark code.

Now, I repeated the test, attaching perf to Emacs only during the
benchmark execution:

With refile cache and markers:
    22.82%  emacs-29.0.50.1  emacs-29.0.50.1                       [.] buf_bytepos_to_charpos
    16.68%  emacs-29.0.50.1  emacs-29.0.50.1                       [.] rpl_re_search_2
     8.02%  emacs-29.0.50.1  emacs-29.0.50.1                       [.] re_match_2_internal
     6.93%  emacs-29.0.50.1  emacs-29.0.50.1                       [.] Fmemq
     4.05%  emacs-29.0.50.1  emacs-29.0.50.1                       [.] allocate_vectorlike
     1.88%  emacs-29.0.50.1  emacs-29.0.50.1                       [.] mark_object

Without refile cache:
    17.25%  emacs-29.0.50.1  emacs-29.0.50.1                         [.] rpl_re_search_2
    15.84%  emacs-29.0.50.1  emacs-29.0.50.1                         [.] buf_bytepos_to_charpos
     8.89%  emacs-29.0.50.1  emacs-29.0.50.1                         [.] re_match_2_internal
     8.00%  emacs-29.0.50.1  emacs-29.0.50.1                         [.] Fmemq
     4.35%  emacs-29.0.50.1  emacs-29.0.50.1                         [.] allocate_vectorlike
     2.01%  emacs-29.0.50.1  emacs-29.0.50.1                         [.] mark_object

The percentages should be adjusted for the longer execution time of the
first dataset, but it is otherwise clear that buf_bytepos_to_charpos
dominates the time delta.

>> I am not sure if I understand the code correctly, but that loop is
>> clearly scaling performance with the number of markers
>
> I may be terribly wrong, but it looks like an optimization attempt that 
> may actually ruin performance. My guess is the following. Due to 
> multibyte characters, a position in the buffer counted in characters may 
> differ significantly from the index into the byte sequence. Since 
> markers hold both bytepos and charpos values, they are used (when 
> available) to narrow the initial estimation interval [0, buffer size) 
> down to the nearest existing markers. The code even creates temporary 
> markers to make the next call of the function faster.

I tend to agree after reading the code again.
I played around with that marker loop. It seems that the loop should not
be disabled outright, but it can be sufficient to check only a small
number of markers at the front of the marker list, since the cached
temporary markers are always added at the front of the list.

Limiting the number of checked markers to 10, I got the following
result:

With threshold and refile cache:
| 9.5.2 (Org version)    | Time, s            | GCs | GC time, s         |
| nm-tst                 |       28.060029337 | 4 | 1.8427608629999996 |
| org-refile-get-targets | 3.2445615439999997 | 0 |                0.0 |
| nm-tst                 | 33.648259137000004 | 4 | 1.2304310540000003 |
| org-refile-cache-clear |        0.034879062 | 0 |                0.0 |
| nm-tst                 |       23.974124596 | 5 | 1.4291488149999996 |

Markers add roughly +5.6 sec.

Original Emacs code and refile cache:
| 9.5.2 (Org version)    | Time, s              | GCs | GC time, s         |
| nm-tst                 |         29.494383528 | 4 | 3.0368508530000002 |
| org-refile-get-targets |          3.635947646 | 1 | 0.4542479730000002 |
| nm-tst                 |         36.537926593 | 4 | 1.1297576349999998 |
| org-refile-cache-clear | 0.009665364999999999 | 0 |                0.0 |
| nm-tst                 |         23.283457105 | 4 | 1.0536496499999997 |

Markers add roughly +7 sec.

The improvement is there, though markers still somehow come into play. I
speculate that limiting the number of checked markers might also force
extra temporary markers to be added to the list, but I have not looked
into that possibility yet. It might be better to discuss this on
emacs-devel before trying too hard.

>> Finally, FYI. I plan to work on an alternative mechanism to access Org
>> headings - generic Org query library. It will not use markers and
>> implement ideas from org-ql. org-refile will eventually use that generic
>> library instead of current mechanism.
>
> I suppose that markers could be implemented in an efficient way, and 
> much better performance may be achieved when the low-level data 
> structures are accessible. I have doubts about attempts to create 
> something that resembles markers but is based purely on a high-level 
> API.

I am currently using a custom version of org-ql utilising the new
element cache. It is substantially faster than the current
org-refile-get-targets. The org-ql version runs in under 2 seconds at
worst when calculating all refile targets from scratch, while
org-refile-get-targets takes over 10 seconds. The org-ql version shows
no noticeable latency when there is an extra text query to narrow down
the refile targets. So, it is certainly possible to improve the
performance using just the high-level org-element cache API plus regexp
search, without markers.

Note that we already have something resembling markers at the level of
the high-level API: it is what the org-element cache does. On every user
edit, it re-calculates the Org element boundaries (note that Nicolas did
not use markers to store the boundaries of Org elements). The merged
headline support in the org-element cache is the first stage of my
initial plan to speed up searching for things in Org, be it agenda
items, IDs, or refile targets.

Best,
Ihor


