I'm trying to figure out what causes high latency while typing in large
org-mode files. The issue is very clearly a result of my large config
file, but I'm not sure how to track it down with any precision.

My main literate config file is ~/.emacs.d/emacs-init.org, currently 15000
lines, 260 src blocks. If I create a ~minimal.el~ config like this:

(let* ((all-paths
        '("/home/matt/src/org-mode/emacs/site-lisp/org")))
  (dolist (p all-paths)
    (add-to-list 'load-path p)))

(require 'org)
(find-file "~/.emacs.d/emacs-init.org")

then I do not notice any latency while typing. If I run the profiler while
using the minimal config, the profile looks about like this at a high
level:

1397  71% - command-execute
 740  37%  - funcall-interactively
 718  36%   - org-self-insert-command
 686  34%    + org-element--cache-after-change
  10   0%    + org-fold-core--fix-folded-region
   3   0%    + blink-paren-post-self-insert-function
   2   0%    + jit-lock-after-change
   1   0%      org-fold-check-before-invisible-edit--text-properties
   9   0%   + previous-line
   6   0%   + minibuffer-complete
   3   0%   + org-return
   3   0%   + execute-extended-command
 657  33%  - byte-code
 657  33%   - read-extended-command
  64   3%    - completing-read-default
  14   0%     + redisplay_internal (C function)
   1   0%     + timer-event-handler
 371  18% - redisplay_internal (C function)
 251  12%  + jit-lock-function
  90   4%  + assq
   7   0%  + substitute-command-keys
   3   0%  + eval
 125   6% + timer-event-handler
  69   3% + ...

--------------------------
However, if I instead use my fairly extensive main config, latency is high
enough that there's a noticeable delay while typing ordinary words. I see
this regardless of whether I build from main or from Ihor's org-fold
feature branch on github.

The profiler overview here is pretty different -- redisplay_internal takes
a much higher percentage of the CPU time:

3170  56% - redisplay_internal (C function)
 693  12%  - substitute-command-keys
 417   7%   + #<compiled -0x1c8b98a4b03336f3>
  59   1%  + assq
  49   0%  + org-in-subtree-not-table-p
  36   0%  + tab-bar-make-keymap
  35   0%    and
  24   0%  + not
  16   0%    org-at-table-p
  13   0%  + jit-lock-function
   8   0%    keymap-canonicalize
   7   0%  + #<compiled 0x74a551771c7fdf1>
   4   0%  + funcall
   4   0%    display-graphic-p
   3   0%  + #<compiled 0xe5940664f7881ee>
   3   0%    file-readable-p
   3   0%  + table--probe-cell
   3   0%    table--row-column-insertion-point-p
1486  26% - command-execute
1200  21%  - byte-code
1200  21%   - read-extended-command
1200  21%    - completing-read-default
1200  21%     - apply
1200  21%      - vertico--advice
 475   8%       + #<subr completing-read-default>

----------------------
I've almost never used the profiler and am not quite sure how I should
proceed to debug this. I realize I can comment out parts of the config one
at a time, but that is not so easy for me to do in my current setup, and I
suppose there are likely to be multiple contributing causes, which I may
not really notice except in the aggregate.

If anyone has suggestions, I would love to hear them!

Thanks,

Matt
i have been dealing with latency also, often in undo-tree. this might
be a dumb suggestion, but is it related to org file size? my files
have not really grown /that/ much but maybe you could bisect one, as
opposed to config.

i am not saying that your org files are too big. just that maybe it
could lead to insights.

On 2/21/22, Matt Price <moptop99@gmail.com> wrote:
> I'm trying to figure out what causes high latency while typing in large
> org-mode files. The issue is very clearly a result of my large config
> file, but I'm not sure how to track it down with any precision.
>
> My main literate config file is ~/.emacs.d/emacs-init.org, currently 15000
> lines, 260 src blocks.
>
> [...]
>
> If anyone has suggestions, I would love to hear them!
>
> Thanks,
>
> Matt

--
The Kafka Pandemic

A blog about science, health, human rights, and misopathy:
https://thekafkapandemic.blogspot.com
Matt Price <moptop99@gmail.com> writes:

> However, if I instead use my fairly extensive main config, latency is high
> enough that there's a noticeable delay while typing ordinary words. I see
> this regardless of whether I build from main or from Ihor's org-fold
> feature branch on github. The profiler overview here is pretty different --
> redisplay_internal takes a much higher percentage of the CPU requirement:
>
> 3170  56% - redisplay_internal (C function)
> ....
> 1200  21% - completing-read-default
> 1200  21%  - apply
> 1200  21%   - vertico--advice
>  475   8%    + #<subr completing-read-default>

Judging from the profiler report, you did not collect enough CPU samples.
I recommend keeping the profiler running for at least 10-30 seconds when
trying to profile typing latency. Also, note that running M-x
profiler-report a second time will _not_ reproduce the previous report;
it will instead show the CPU profile collected between the first
invocation of profiler-report and the second one.

I recommend doing the following:

1. M-x profiler-stop
2. M-x profiler-start
3. Type in the problematic Org file for 10-30 seconds
4. M-x profiler-report (once!)
5. Share the report here

> I've almost never used the profiler and am not quite sure how I should
> proceed to debug this. I realize I can comment out parts of the config one
> at a time, but that is not so easy for me to do in my current setup, and I
> suppose there are likely to be multiple contributing causes, which I may
> not really notice except in the aggregate.

The above steps should be the first thing to try, and they will likely
reveal the bottleneck. If not, you can fall back on bisecting your
config. I do not recommend manually commenting/uncommenting parts of your
large config. Instead, you can try
https://github.com/Malabarba/elisp-bug-hunter. But only if CPU profiling
does not reveal anything useful.

Best,
Ihor
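As an aside, Ihor's M-x steps above can also be driven from Lisp; the sketch below just strings together the built-in profiler commands and is not part of the original thread:

```elisp
;; Equivalent of the M-x steps above, evaluated one form at a time
;; (e.g. with C-x C-e in *scratch*).
(ignore-errors (profiler-stop))  ; reset any profiler session already running
(profiler-start 'cpu)            ; sample CPU; 'mem and 'cpu+mem also work

;; ...now type in the problematic Org buffer for 10-30 seconds...
;; then produce the report exactly once:
;; (profiler-report)
```

Calling profiler-report a second time starts a fresh sampling window, which is exactly the pitfall described above.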
Samuel Wales <samologist@gmail.com> writes:
> i have been dealing with latency also, often in undo-tree. this might
> be a dumb suggestion, but is it related to org file size? my files
> have not really grown /that/ much but maybe you could bisect one. as
> opposed to config.
I am wondering if many people in the list experience latency issues.
Maybe we can organise an online meeting (jitsi or BBB) and collect the
common causes/ do online interactive debugging?
Best,
Ihor
On Tue, Feb 22, 2022, 12:34 AM Ihor Radchenko <yantar92@gmail.com> wrote:

> I am wondering if many people in the list experience latency issues.
> Maybe we can organise an online meeting (jitsi or BBB) and collect the
> common causes/ do online interactive debugging?

+1

I have seen a few people report this issue on the ox-hugo issue tracker:
https://github.com/kaushalmodi/ox-hugo/discussions/551#discussioncomment-2104352
Ihor Radchenko <yantar92@gmail.com> writes:
> Samuel Wales <samologist@gmail.com> writes:
>
>> i have been dealing with latency also, often in undo-tree. this might
>> be a dumb suggestion, but is it related to org file size? my files
>> have not really grown /that/ much but maybe you could bisect one. as
>> opposed to config.
>
> I am wondering if many people in the list experience latency issues.
FYI: I experience high latency when typing near in-text citations, such
as [cite:@ganz+2013]. It got so bad that I converted all my files to
hard-wrapped lines. After I did that, Org mode became usable again,
but it still lags visibly when typing near a citation.
Rudy
--
"'Contrariwise,' continued Tweedledee, 'if it was so, it might be; and
if it were so, it would be; but as it isn't, it ain't. That's logic.'"
-- Lewis Carroll, Through the Looking Glass, 1871/1872
Rudolf Adamkovič <salutis@me.com> [he/him]
Studenohorská 25
84103 Bratislava
Slovakia
Yes, it definitely seems to be related to file size, which makes me think
that some kind of buffer parsing is the cause of the problem. I'll reply
in more detail to Ihor, down below!

On Mon, Feb 21, 2022 at 5:22 PM Samuel Wales <samologist@gmail.com> wrote:

> i have been dealing with latency also, often in undo-tree. this might
> be a dumb suggestion, but is it related to org file size? my files
> have not really grown /that/ much but maybe you could bisect one. as
> opposed to config.
>
> i am not saying that your org files are too big. just that maybe it
> could lead to insights.
>
> [...]
>
> --
> The Kafka Pandemic
>
> A blog about science, health, human rights, and misopathy:
> https://thekafkapandemic.blogspot.com
sorry everyone, I accidentally sent this to Kaushal this morning, and then
took quite a while to get back to a computer after he let me know my
mistake!

On Tue, Feb 22, 2022 at 10:12 AM Matt Price <moptop99@gmail.com> wrote:

> On Tue, Feb 22, 2022 at 12:45 AM Kaushal Modi <kaushal.modi@gmail.com> wrote:
>
>> On Tue, Feb 22, 2022, 12:34 AM Ihor Radchenko <yantar92@gmail.com> wrote:
>>
>>> I am wondering if many people in the list experience latency issues.
>>> Maybe we can organise an online meeting (jitsi or BBB) and collect the
>>> common causes/ do online interactive debugging?
>>
>> +1
>>
>> I have seen few people see this issue on the ox-hugo issue tracker:
>> https://github.com/kaushalmodi/ox-hugo/discussions/551#discussioncomment-2104352
>
> I think it's a great idea, Ihor!
>
> Meanwhile, I have a profile report. I had a little trouble getting the
> slowness to return (of course) but, subjectively, it seemed to get worse
> (slower, and the laptop fan started up b/c of high cpu usage) when I
> created and entered a src block. Apologies for the long paste:
>
> 45707  70% - redisplay_internal (C function)
>  8468  13%  - substitute-command-keys
>  6111   9%   - #<compiled -0x1c8c1b294a898af3>
>   943   1%    - kill-buffer
>   708   1%     - replace-buffer-in-windows
>   614   0%      - unrecord-window-buffer
>   515   0%       - assq-delete-all
>   142   0%          assoc-delete-all
>     3   0%        delete-char
>  8060  12%  - assq
>  2598   4%   - org-context
>    15   0%      org-inside-LaTeX-fragment-p
>    12   0%    - org-in-src-block-p
>    12   0%     - org-element-at-point
>     9   0%      - org-element--cache-verify-element
>     9   0%         org-element--parse-to
>     3   0%        org-element--parse-to
>     8   0%    - org-at-timestamp-p
>     8   0%       org-in-regexp
>   642   0%  + tab-bar-make-keymap
>   309   0%  + and
>   270   0%  + org-in-subtree-not-table-p
>   196   0%  + not
>   163   0%  + jit-lock-function
>   115   0%  + org-entry-get
>    96   0%    keymap-canonicalize
>    56   0%    org-at-table-p
>    52   0%  + #<compiled -0x16b737fc61e8f6c2>
>    48   0%  + #<compiled 0xf76e59543b881ee>
>    43   0%    table--row-column-insertion-point-p
>    29   0%    org-inside-LaTeX-fragment-p
>    27   0%  + menu-bar-positive-p
>    26   0%  + eval
>    24   0%    file-readable-p
>    21   0%  + funcall
>    16   0%  + imenu-update-menubar
>    14   0%  + vc-menu-map-filter
>    13   0%  + table--probe-cell
>    12   0%  + or
>    11   0%  + let
>    11   0%  + org-at-timestamp-p
>    10   0%  + flycheck-overlays-at
>     7   0%    undo-tree-update-menu-bar
>     6   0%  + require
>     6   0%  + emojify-update-visible-emojis-background-after-window-scroll
>     6   0%    kill-this-buffer-enabled-p
>     4   0%    mode-line-default-help-echo
>     3   0%  + null
>  9192  14% - ...
>  9172  14%    Automatic GC
>    20   0%  - kill-visual-line
>    20   0%   - kill-region
>    20   0%    - filter-buffer-substring
>    20   0%     - org-fold-core--buffer-substring-filter
>    20   0%      - buffer-substring--filter
>    20   0%       - #<compiled -0xf6f823dd60bce2>
>    20   0%        - apply
>    20   0%         - #<subr F616e6f6e796d6f75732d6c616d626461_anonymous_lambda_18>
>    20   0%          - #<compiled -0x18bec5098484d202>
>    20   0%           - apply
>    20   0%            - #<compiled -0x10861dfcfb752f31>
>    20   0%             - #<compiled -0xf6f823dd60bce2>
>    20   0%              - #<compiled -0xf6f823dd60bce2>
>    20   0%               - apply
>    20   0%                - #<compiled -0xab81927f0743ad>
>    20   0%                 + delete-and-extract-region
>  7847  12% - command-execute
>  5749   8%  - funcall-interactively
>  2963   4%   + org-self-insert-command
>  2186   3%   + org-cycle
>   148   0%   + corfu-insert
>   146   0%   + execute-extended-command
>   121   0%   + org-return
>    32   0%   + #<lambda 0xb0f62da54c2c7>
>    26   0%   + #<lambda 0xb0f62da54c2cb>
>    24   0%   + mwim-beginning
>    19   0%   + org-delete-backward-char
>    19   0%   + org-kill-line
>     9   0%   + #<lambda 0xb0f62da54c2ec>
>     6   0%   + file-notify-handle-event
>  2095   3%  + byte-code
>  1359   2% + timer-event-handler
>   375   0%  + org-appear--post-cmd
>   160   0%  + corfu--post-command
>    61   0%  + org-fragtog--post-cmd
>    14   0%  + emojify-update-visible-emojis-background-after-command
>    11   0%    guide-key/close-guide-buffer
>     7   0%  + flycheck-perform-deferred-syntax-check
>     7   0%  + flycheck-maybe-display-error-at-point-soon
>     6   0%    undo-auto--add-boundary
>     6   0%  + corfu--auto-post-command
>     4   0%    flycheck-error-list-update-source
>     3   0%    internal-timer-start-idle
>     3   0%    sp--post-command-hook-handler
>
> ---------------
> For fun, I tried a second time with font-lock-mode turned off. I didn't
> notice much difference in speed subjectively. The profile showed an even
> higher percentage for redisplay_internal, though I don't quite understand
> exactly how much of that is triggered by the higher-order functions listed
> below it, esp org-in-src-block-p and org-inside-latex-fragment-p. In any
> case, here it is for comparison:
>
> 20128  80% - redisplay_internal (C function)
>  7142  28%  - assq
>   908   3%   - org-context
>    12   0%      org-inside-LaTeX-fragment-p
>     6   0%    + org-in-src-block-p
>  3060  12%  - substitute-command-keys
>  2176   8%   - #<compiled -0x1c8c1b3af6786af3>
>   320   1%    - kill-buffer
>   237   0%     - replace-buffer-in-windows
>   197   0%      - unrecord-window-buffer
>   158   0%       - assq-delete-all
>    57   0%          assoc-delete-all
>     6   0%   + delete-char
>   215   0%  + tab-bar-make-keymap
>    97   0%  + org-in-subtree-not-table-p
>    94   0%  + and
>    44   0%  + not
>    41   0%  + keymap-canonicalize
>    25   0%  + #<compiled 0xf76e59543b881ee>
>    22   0%  + eval
>    21   0%  + jit-lock-function
>    16   0%  + org-entry-get
>    15   0%    org-at-table-p
>    14   0%  + #<compiled -0x16b737fc61e8f6c2>
>    12   0%  + vc-menu-map-filter
>    10   0%  + org-at-timestamp-p
>     6   0%  + let
>     6   0%    file-readable-p
>     6   0%    table--row-column-insertion-point-p
>     4   0%  + imenu-update-menubar
>     4   0%    eq
>     3   0%  + or
>     3   0%    org-inside-LaTeX-fragment-p
>     3   0%    kill-this-buffer-enabled-p
>     3   0%    display-graphic-p
>     3   0%    get-buffer-process
>  3082  12% - ...
>  3082  12%    Automatic GC
>  1546   6% - command-execute
>   968   3%  - byte-code
>   968   3%   - read-extended-command
>   968   3%    - completing-read-default
>   968   3%     - apply
>   968   3%      - vertico--advice
>   695   2%       + #<subr completing-read-default>
>   578   2%  - funcall-interactively
>   534   2%   - org-self-insert-command
>    31   0%    + org-fold-core--fix-folded-region
>    25   0%    + org-num--verify
>     9   0%    + flycheck-handle-change
>     8   0%    + org-element--cache-after-change
>     7   0%    + org-indent-refresh-maybe
>     6   0%    + jit-lock-after-change
>     5   0%      org-at-table-p
>     4   0%      org-fix-tags-on-the-fly
>     3   0%      org-fold-check-before-invisible-edit--text-properties
>     3   0%      org-indent-notify-modified-headline
>    12   0%   + org-delete-backward-char
>     4   0%   + #<lambda 0xb0f62da54c2ec>
>     3   0%   + #<lambda 0xb0f62da54c2cb>
>   279   1% + timer-event-handler
>    26   0%  + org-appear--post-cmd
>    12   0%  + emojify-update-visible-emojis-background-after-command
>     9   0%  + org-fragtog--post-cmd
>     8   0%  + undo-auto--add-boundary
>     4   0%    corfu--auto-post-command
>     4   0%    internal-timer-start-idle
>     3   0%  + flycheck-maybe-display-error-at-point-soon
>
> -----------------------------------------
>
> Does this look at all useful so far?
Matt Price <moptop99@gmail.com> writes:
>> 20128 80% - redisplay_internal (C function)
>> 7142 28% - assq
>> 908 3% - org-context
Note that org-context is an obsolete function. Do you directly call it
in your config? Or do you use a third-party package calling org-context?
Best,
Ihor
Matt Price <moptop99@gmail.com> writes:
> Yes, it definitely seems to be related to file size, which makes me think
> that some kind of buffer parsing is the cause of the problem.
Parsing would show up in the profiler report in such scenario. It is not
the case though. The problem might be invisible text (it would cause
redisplay become slow), but 15k lines is relatively small - it should
not cause redisplay issues according to my experience. Just to be sure,
I would try to check performance in a completely unfolded buffer.
Best,
Ihor
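To try the unfolded-buffer check Ihor suggests, something like the following sketch should work (`org-fold-show-all` is the org-fold name; older Org versions use `org-show-all`). Counting overlays is my addition, since invisible text and overlays are the suspects here:

```elisp
;; Unfold the whole Org buffer before profiling, and report how many
;; overlays it carries.
(with-current-buffer (find-file-noselect "~/.emacs.d/emacs-init.org")
  (if (fboundp 'org-fold-show-all)
      (org-fold-show-all)   ; org-fold branch / newer Org
    (org-show-all))         ; older Org versions
  (message "Overlays in buffer: %d"
           (length (overlays-in (point-min) (point-max)))))
```

If typing is fast with everything unfolded but slow when folded, that points at the invisible-text/overlay machinery rather than parsing.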
Dear all,

Since there are at least a couple of people who might be interested, let's
try to meet online on jitsi and debug performance issues you experience
because of Org mode. Probably some time this Saturday (Feb 26). I am
thinking about 9pm SG time (4pm Moscow; 8am New York; 1pm London). WDYT?

Participants should preferably install the latest Org version from main.
Older versions are also ok, but will be less of a priority.

Best,
Ihor
On Wed, Feb 23, 2022 at 12:22 AM Ihor Radchenko <yantar92@gmail.com> wrote:

> Matt Price <moptop99@gmail.com> writes:
>
>>> 20128  80% - redisplay_internal (C function)
>>>  7142  28%  - assq
>>>   908   3%   - org-context
>
> Note that org-context is an obsolete function. Do you directly call it
> in your config? Or do you use a third-party package calling org-context?

Hmm. I don't see it anywhere in my ~.emacs.d/elpa~ directory or in my
config file. I also went through ORG-NEWS and, while it mentions that
org-context-p has been removed, I can't find a deprecation notice about
org-context. I'm not quite sure what's going on. Will investigate further!
Matt Price <moptop99@gmail.com> writes:

>> Note that org-context is an obsolete function. Do you directly call it
>> in your config? Or do you use a third-party package calling org-context?
>
> Hmm. I don't see it anywhere in my ~.emacs.d/elpa~ directory or in my
> config file. I also went through ORG-NEWS and while it mentions that
> org-context-p has been removed, I can't find a deprecation notice about
> org-context. I'm not quite sure what's going on. Will investigate further!

That notice itself is WIP :facepalm: Basically, org-context is not
reliable because it relies on fontification. See
https://orgmode.org/list/877depxyo9.fsf@localhost

Best,
Ihor
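One way to find out who is calling org-context is a temporary advice that logs a backtrace on the first call and then removes itself (a debugging sketch; my/org-context-spy is a made-up name, not anything from the thread):

```elisp
;; Log one backtrace for org-context, then remove the advice so the
;; *Messages* buffer is not flooded (it may be called on every redisplay
;; from mode-line or menu code).
(defun my/org-context-spy (&rest _)
  (advice-remove 'org-context #'my/org-context-spy)
  (message "org-context called from:\n%s"
           (with-output-to-string (backtrace))))

(advice-add 'org-context :before #'my/org-context-spy)
```

The logged backtrace should name the package that still calls the obsolete function.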
On 22/02/2022 12:33, Ihor Radchenko wrote:
>
> I am wondering if many people in the list experience latency issues.

Ihor, this is probably not the feedback that you would like to get
concerning the following patch:

Ihor Radchenko. [PATCH 01/35] Add org-fold-core: new folding engine.
Sat, 29 Jan 2022 19:37:53 +0800.
https://list.orgmode.org/74cd7fc06a4540b1d63d1e7f9f2542f83e1eaaae.1643454545.git.yantar92@gmail.com

but my question may be more appropriate in this thread. I noticed the
following:

> +;; the same purpose. Overlays are implemented with O(n) complexity in
> +;; Emacs (as for 2021-03-11). It means that any attempt to move
> +;; through hidden text in a file with many invisible overlays will
> +;; require time scaling with the number of folded regions (the problem
> +;; Overlays note of the manual warns about). For curious, historical
> +;; reasons why overlays are not efficient can be found in
> +;; https://www.jwz.org/doc/lemacs.html.

The linked document consists of a lot of messages. Could you, please,
provide a more specific location within the rather long page?
Max Nikulin <manikulin@gmail.com> writes:

>> +;; the same purpose. Overlays are implemented with O(n) complexity in
>> +;; Emacs (as for 2021-03-11). It means that any attempt to move
>> +;; through hidden text in a file with many invisible overlays will
>> +;; require time scaling with the number of folded regions (the problem
>> +;; Overlays note of the manual warns about). For curious, historical
>> +;; reasons why overlays are not efficient can be found in
>> +;; https://www.jwz.org/doc/lemacs.html.
>
> The linked document consists of a lot of messages. Could you, please,
> provide more specific location within the rather long page?

There is no specific location. That thread is an old drama that unfolded
when intervals were first implemented by a third-party company (they were
called intervals at the time). AFAIU, the fact that intervals are stored
in a list and suffer from O(N) complexity originates from that time. Just
history, as I pointed out in the comment.

FYI, a more optimal overlay data structure has been attempted in the
feature/noverlay branch (for example, see
https://git.savannah.gnu.org/cgit/emacs.git/commit/?h=feature/noverlay&id=8d7bdfa3fca076b34aaf86548d3243bee11872ad).
But there has been no activity on that branch for years.

Best,
Ihor
On Wed, Feb 23, 2022 at 7:37 AM Ihor Radchenko <yantar92@gmail.com> wrote:
>
> Dear all,
>
> Since there is at least a couple of people who might be interested, lets
> try to meet online on jitsi and debug performance issues you experience
> because of Org mode. Probably some time this Saturday (Feb 26). I am
> thinking about 9pm SG time (4pm Moscow; 8am New York; 1pm London). WDYT?
That time will work for me. Thanks!
On 23/02/2022 23:35, Ihor Radchenko wrote:
> Max Nikulin writes:
>
>>> +;; the same purpose. Overlays are implemented with O(n) complexity in
>>> +;; Emacs (as for 2021-03-11). It means that any attempt to move
>>> +;; through hidden text in a file with many invisible overlays will
>>> +;; require time scaling with the number of folded regions (the problem
>>> +;; Overlays note of the manual warns about). For curious, historical
>>> +;; reasons why overlays are not efficient can be found in
>>> +;; https://www.jwz.org/doc/lemacs.html.
>>
>> The linked document consists of a lot of messages. Could you, please,
>> provide more specific location within the rather long page?
>
> There is no specific location. That thread is an old drama unfolded when
> intervals were first implemented by a third-party company (they were called
> intervals that time). AFAIU, the fact that intervals are stored in a
> list and suffer from O(N) complexity originates from that time. Just
> history, as I pointed in the comment.
Thank you, Ihor. I am still not motivated enough to read the whole page, but
searching for "interval" (earlier I tried "overlay") resulted in the
following message:
Message-ID: <9206230917.AA16758@mole.gnu.ai.mit.edu>
Date: Tue, 23 Jun 92 05:17:33 -0400
From: rms@gnu.ai.mit.edu (Richard Stallman)
describing tree balancing problem in GNU Emacs and linear search in lucid.
Unfortunately there are no "id" or "name" anchors in the file suitable for
specifying a precise location. Even the link href is broken.
Actually I suspect that markers may have a similar problem during regexp
searches. I am curious if it is possible to invoke a kind of "vacuum"
(in SQL parlance). Folding all headings and resetting refile cache does
not restore performance to the initial state at session startup. Maybe
it is effect of incremental searches.
Sorry, I have not tried patches for text properties instead of overlays.
Ihor Radchenko <yantar92@gmail.com> writes:
> Dear all,
>
> Since there is at least a couple of people who might be interested, lets
> try to meet online on jitsi and debug performance issues you experience
> because of Org mode. Probably some time this Saturday (Feb 26). I am
> thinking about 9pm SG time (4pm Moscow; 8am New York; 1pm London). WDYT?
>
> Participants should preferably install the latest Org version from main.
> Older versions are also ok, but will be less of priority.
I will post the link one hour before the meeting start.
Best,
Ihor
Max Nikulin <manikulin@gmail.com> writes:

> Thank you, Ihor. I am still not motivated enough to read whole page but
> searching for "interval" (earlier I tried "overlay") resulted in the
> following message:
>
> Message-ID: <9206230917.AA16758@mole.gnu.ai.mit.edu>
> Date: Tue, 23 Jun 92 05:17:33 -0400
> From: rms@gnu.ai.mit.edu (Richard Stallman)
>
> describing tree balancing problem in GNU Emacs and linear search in lucid.
>
> Unfortunately there is no "id" or "name" anchors in the file suitable to
> specify precise location. Even the link href is broken.

I think we have a misunderstanding here. That page does not contain much
in the way of technical details; rather, a history. AFAIU, Emacs initially
wanted to implement a balanced tree structure to store overlays, but the
effort stalled for a long time. Then a company rolled out a simple list
storage, causing a lot of contention related to the FSF and a major Emacs
fork. In the end, the initial balanced-tree effort on the GNU Emacs side
did not go anywhere, and GNU Emacs eventually copied the simple list
approach that is backfiring now, when Org buffers actually do contain
large numbers of overlays.

> Actually I suspect that markers may have a similar problem during regexp
> searches. I am curious if it is possible to invoke a kind of "vacuum"
> (in SQL parlance). Folding all headings and resetting refile cache does
> not restore performance to the initial state at session startup. Maybe
> it is effect of incremental searches.

I doubt that markers have anything to do with regexp search itself
(directly). They should only come into play when editing text in the
buffer, where their performance is also O(N_markers).

Best,
Ihor
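The list-based overlay storage Ihor describes is easy to observe directly. The sketch below (my illustration, not from the thread) times a repeated edit in a buffer carrying many overlays, as a heavily folded Org buffer would; absolute numbers are machine-dependent:

```elisp
(require 'benchmark)

;; Time a repeated edit in a buffer with 10k overlays.
;; benchmark-run returns (elapsed-seconds gc-runs gc-seconds); compare
;; against the same edit in a buffer without overlays.
(with-temp-buffer
  (dotimes (_ 10000) (insert "some text\n"))
  ;; Cover every line with an overlay.
  (goto-char (point-min))
  (dotimes (_ 10000)
    (make-overlay (line-beginning-position) (line-end-position))
    (forward-line 1))
  (goto-char (point-min))
  (message "Edit with 10k overlays: %S"
           (benchmark-run 100 (progn (insert "x") (delete-char -1)))))
```

Since each buffer change has to touch the overlay list, the elapsed time grows with the overlay count, which is the O(N) behaviour the org-fold-core comment warns about.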
Ihor Radchenko <yantar92@gmail.com> writes:

> Ihor Radchenko <yantar92@gmail.com> writes:
>
>> Dear all,
>>
>> Since there is at least a couple of people who might be interested, lets
>> try to meet online on jitsi and debug performance issues you experience
>> because of Org mode. Probably some time this Saturday (Feb 26). I am
>> thinking about 9pm SG time (4pm Moscow; 8am New York; 1pm London). WDYT?
>>
>> Participants should preferably install the latest Org version from main.
>> Older versions are also ok, but will be less of priority.
>
> I will post the link one hour before the meeting start.

The link is https://meet.jit.si/Org-dev-profiling-20220226-d708k
Password: plaintext

Best,
Ihor
On 26/02/2022 14:45, Ihor Radchenko wrote:
>
> I think we have a misunderstanding here. That page does not contain much
> of technical details. Rather a history.

Thank you for the clarification. Certainly, I originally hoped to get some
explanation of why it was not implemented in a more efficient way. At
first I read the starting part of the text. It was still interesting to
read the story that, due to the delay of an Emacs release, people had to
fork it into Lucid. I did not know that XEmacs was a successor of Lucid.

> Max Nikulin writes:
>> Actually I suspect that markers may have a similar problem during regexp
>> searches. I am curious if it is possible to invoke a kind of "vacuum"
>> (in SQL parlance). Folding all headings and resetting refile cache does
>> not restore performance to the initial state at session startup. Maybe
>> it is effect of incremental searches.
>
> I doubt that markers have anything to do with regexp search itself
> (directly). They should only come into play when editing text in buffer,
> where their performance is also O(N_markers).

I believe you confirmed my conclusion earlier:

Ihor Radchenko. Re: [BUG] org-goto slows down org-set-property.
Sun, 11 Jul 2021 19:49:08 +0800.
https://list.orgmode.org/orgmode/87lf6dul3f.fsf@localhost/
Ihor Radchenko <yantar92@gmail.com> writes:

> Ihor Radchenko <yantar92@gmail.com> writes:
>
>> Ihor Radchenko <yantar92@gmail.com> writes:
>>
>>> Dear all,
>>>
>>> Since there are at least a couple of people who might be interested,
>>> let's try to meet online on Jitsi and debug performance issues you
>>> experience because of Org mode. Probably some time this Saturday
>>> (Feb 26). I am thinking about 9pm SG time (4pm Moscow; 8am New York;
>>> 1pm London). WDYT?
>>>
>>> Participants should preferably install the latest Org version from
>>> main. Older versions are also ok, but will be less of a priority.
>>
>> I will post the link one hour before the meeting start.
>
> The link is https://meet.jit.si/Org-dev-profiling-20220226-d708k
> Password: plaintext

FYI, we got an issue with meet.jit.si. Moving the meeting to a
different server: https://teamjoin.de/Org-dev-profiling-20220226-d708k

Sorry for the last-minute update.

Best,
Ihor
Open up an XMPP group for Org mode; that Jabber chat is lightweight and
accessible through Emacs jabber.el and a plethora of other
applications.
Don't forget to include Org links to XMPP groups.
On February 22, 2022 5:33:13 AM UTC, Ihor Radchenko <yantar92@gmail.com> wrote:
>Samuel Wales <samologist@gmail.com> writes:
>
>> i have been dealing with latency also, often in undo-tree. this
>> might be a dumb suggestion, but is it related to org file size? my
>> files have not really grown /that/ much but maybe you could bisect
>> one. as opposed to config.
>
>I am wondering if many people in the list experience latency issues.
>Maybe we can organise an online meeting (jitsi or BBB) and collect the
>common causes/ do online interactive debugging?
>
>Best,
>Ihor
Jean
I am not sure whether I can be helpful, but I am an Org user. I am not
a developer. I would like to contribute by testing. Currently I use
version 9.3. I do not want to install anything outside my operating
system distribution's software versions (Hyperbola GNU 0.4). I am also
unable to use a microphone and camera with the web browser because of
security concerns raised by the distribution developers. I have used
ERC, mailing lists, Tox, and Mumble successfully with this
distribution. With these caveats, please consider my contributions, if
required.
Max Nikulin <manikulin@gmail.com> writes:
>> Max Nikulin writes:
>>> Actually I suspect that markers may have a similar problem during regexp
>>> searches. I am curious if it is possible to invoke a kind of "vacuum"
>>> (in SQL parlance). Folding all headings and resetting refile cache does
>>> not restore performance to the initial state at session startup. Maybe
>>> it is effect of incremental searches.
>>
>> I doubt that markers have anything to do with regexp search itself
>> (directly). They should only come into play when editing text in buffer,
>> where their performance is also O(N_markers).
>
> I believe you confirmed my conclusion earlier:
>
> Ihor Radchenko. Re: [BUG] org-goto slows down org-set-property.
> Sun, 11 Jul 2021 19:49:08 +0800.
> https://list.orgmode.org/orgmode/87lf6dul3f.fsf@localhost/
I confirmed that invoking org-refile-get-targets slows down your nm-tst
looping over the headlines.
However, the issue is not with outline-next-heading there. Profiling
shows that the slowdown mostly happens in org-get-property-block.
I have looked into the regexp search C source and did not find anything
that could depend on the number of markers in the buffer.
After further analysis (prompted by your email), I found that I may be
wrong and that regexp search might actually be affected.
Now, I did an extended profiling of what is happening using perf:
;; perf cpu with refile cache (using your previous code on my largest Org buffer)
19.68% [.] mark_object
6.20% [.] buf_bytepos_to_charpos
5.66% [.] re_match_2_internal
5.33% [.] exec_byte_code
5.07% [.] rpl_re_search_2
3.09% [.] Fmemq
2.56% [.] allocate_vectorlike
1.86% [.] sweep_vectors
1.47% [.] mark_objects
1.45% [.] pdumper_marked_p_impl
;; perf cpu without refile cache (removing getting refile targets from the code)
18.79% [.] mark_object
8.23% [.] re_match_2_internal
5.88% [.] rpl_re_search_2
4.06% [.] buf_bytepos_to_charpos
3.06% [.] Fmemq
2.45% [.] allocate_vectorlike
1.63% [.] exec_byte_code
1.50% [.] pdumper_marked_p_impl
The bottleneck appears to be buf_bytepos_to_charpos, called by the
BYTE_TO_CHAR macro, which, in turn, is used by set_search_regs.
buf_bytepos_to_charpos contains the following loop:
for (tail = BUF_MARKERS (b); tail; tail = tail->next)
  {
    CONSIDER (tail->bytepos, tail->charpos);

    /* If we are down to a range of 50 chars,
       don't bother checking any other markers;
       scan the intervening chars directly now.  */
    if (best_above - bytepos < distance
        || bytepos - best_below < distance)
      break;
    else
      distance += BYTECHAR_DISTANCE_INCREMENT;
  }
I am not sure if I understand the code correctly, but that loop clearly
scales with the number of markers in the buffer.
Finally, FYI: I plan to work on an alternative mechanism for accessing
Org headings - a generic Org query library. It will not use markers and
will implement ideas from org-ql. org-refile will eventually use that
generic library instead of the current mechanism.
Best,
Ihor
Ihor Radchenko <yantar92@gmail.com> writes:

> Ihor Radchenko <yantar92@gmail.com> writes:
>
>> Ihor Radchenko <yantar92@gmail.com> writes:
>>
>>> Ihor Radchenko <yantar92@gmail.com> writes:
>>>
>>>> Dear all,
>>>>
>>>> Since there are at least a couple of people who might be
>>>> interested, let's try to meet online on Jitsi and debug performance
>>>> issues you experience because of Org mode. Probably some time this
>>>> Saturday (Feb 26). I am thinking about 9pm SG time (4pm Moscow;
>>>> 8am New York; 1pm London). WDYT?
>>>>
>>>> Participants should preferably install the latest Org version from
>>>> main. Older versions are also ok, but will be less of a priority.
>>>
>>> I will post the link one hour before the meeting start.

Summary of the meeting:

1. A fairly long discussion about the performance of text properties
   vs. overlays in Emacs.
2. Debugging Emacs hangs during ox-hugo export (see also
   https://github.com/kaushalmodi/ox-hugo/discussions/551#discussioncomment-2104352).
   The problem appears to be not performance per se, but rather some
   bug causing an infinite loop in org-element-cache-map. The problem
   is not reproducible with my own config, so remote debugging will
   remain the only good option. To be continued.

Best,
Ihor
On 27/02/2022 13:43, Ihor Radchenko wrote:
>
> Now, I did an extended profiling of what is happening using perf:
>
> 6.20% [.] buf_bytepos_to_charpos

Maybe I am interpreting such results wrongly, but that does not look
like a bottleneck. Anyway, thank you very much for such efforts;
however, it is unlikely that I will join the profiling in the near
future.

> buf_bytepos_to_charpos contains the following loop:
>
> for (tail = BUF_MARKERS (b); tail; tail = tail->next)
>   {
>     CONSIDER (tail->bytepos, tail->charpos);
>
>     /* If we are down to a range of 50 chars,
>        don't bother checking any other markers;
>        scan the intervening chars directly now.  */
>     if (best_above - bytepos < distance
>         || bytepos - best_below < distance)
>       break;
>     else
>       distance += BYTECHAR_DISTANCE_INCREMENT;
>   }
>
> I am not sure if I understand the code correctly, but that loop is
> clearly scaling performance with the number of markers

I may be terribly wrong, but it looks like an optimization attempt that
may actually ruin performance. My guess is the following. Due to
multibyte characters, a position in the buffer counted in characters
may differ significantly from the index into the byte sequence. Since
markers store both values, bytepos and charpos, they are used (when
available) to narrow down the initial estimation interval
[0, buffer size) to the nearest existing markers. The code below even
creates temporary markers to make the next call of the function faster.

It seems buffers do not have any additional structure that tracks the
size in bytes and in characters of spans (I would not expect the
representation of a whole buffer in memory to be a single contiguous
byte array). When there are no markers at all, the function has to
iterate over each character and count its length. The problem is that
when the buffer has a lot of markers, all far away from the position
passed as the argument, iterating over the markers just consumes CPU
with no significant improvement over the original estimation of the
boundaries.

If markers were organized in a tree, then the search would be much
faster (at least for buffers with a lot of markers). In some cases such
a function might take a hint: a previously known bytepos+charpos pair.
I hope I have missed something, but what I would expect from the code
of buf_bytepos_to_charpos is that it is necessary to iterate over all
markers to update positions after each typed character.

> Finally, FYI. I plan to work on an alternative mechanism to access Org
> headings - generic Org query library. It will not use markers and
> implement ideas from org-ql. org-refile will eventually use that
> generic library instead of current mechanism.

I suppose that markers might be implemented in an efficient way, and
much better performance may be achieved when the low-level data
structures are accessible. I have doubts about attempts to create
something that resembles markers but is based purely on a high-level
API.
Max Nikulin <manikulin@gmail.com> writes:

> On 27/02/2022 13:43, Ihor Radchenko wrote:
>>
>> Now, I did an extended profiling of what is happening using perf:
>>
>> 6.20% [.] buf_bytepos_to_charpos
>
> Maybe I am interpreting such results wrongly, but it does not look like
> a bottleneck. Anyway thank you very much for such efforts, however it
> is unlikely that I will join the profiling in the near future.

The perf data I provided is a bit tricky. I recorded statistics over
the whole Emacs session + used a fairly small number of iterations in
your benchmark code. Now, I repeated the testing, attaching perf to
Emacs only during the benchmark execution:

With refile cache and markers:

22.82% emacs-29.0.50.1 emacs-29.0.50.1 [.] buf_bytepos_to_charpos
16.68% emacs-29.0.50.1 emacs-29.0.50.1 [.] rpl_re_search_2
 8.02% emacs-29.0.50.1 emacs-29.0.50.1 [.] re_match_2_internal
 6.93% emacs-29.0.50.1 emacs-29.0.50.1 [.] Fmemq
 4.05% emacs-29.0.50.1 emacs-29.0.50.1 [.] allocate_vectorlike
 1.88% emacs-29.0.50.1 emacs-29.0.50.1 [.] mark_object

Without refile cache:

17.25% emacs-29.0.50.1 emacs-29.0.50.1 [.] rpl_re_search_2
15.84% emacs-29.0.50.1 emacs-29.0.50.1 [.] buf_bytepos_to_charpos
 8.89% emacs-29.0.50.1 emacs-29.0.50.1 [.] re_match_2_internal
 8.00% emacs-29.0.50.1 emacs-29.0.50.1 [.] Fmemq
 4.35% emacs-29.0.50.1 emacs-29.0.50.1 [.] allocate_vectorlike
 2.01% emacs-29.0.50.1 emacs-29.0.50.1 [.] mark_object

Percentages should be adjusted for the larger execution time in the
first dataset, but otherwise it is clear that buf_bytepos_to_charpos
dominates the time delta.

>> I am not sure if I understand the code correctly, but that loop is
>> clearly scaling performance with the number of markers
>
> I may be terribly wrong, but it looks like an optimization attempt that
> may actually ruin performance. My guess is the following. Due to
> multibyte characters position in buffer counted in characters may
> significantly differ from index in byte sequence. Since markers have
> both values bytepos and charpos, they are used (when available) to
> narrow down initial estimation interval [0, buffer size) to nearest
> existing markers. The code below even creates temporary markers to
> make next call of the function faster.

I tend to agree after reading the code again.

I tried to play around with that marker loop. It seems that the loop
should not be mindlessly disabled, but it can be sufficient to check
only a small number of markers at the front of the marker list. The
cached temporary markers are always added to the front of the list.
Limiting the number of checked markers to 10, I got the following
result:

With threshold and refile cache:

| 9.5.2                  |                      |   |                    |
| nm-tst                 | 28.060029337         | 4 | 1.8427608629999996 |
| org-refile-get-targets | 3.2445615439999997   | 0 | 0.0                |
| nm-tst                 | 33.648259137000004   | 4 | 1.2304310540000003 |
| org-refile-cache-clear | 0.034879062          | 0 | 0.0                |
| nm-tst                 | 23.974124596         | 5 | 1.4291488149999996 |

Markers add +~5.6sec.

Original Emacs code and refile cache:

| 9.5.2                  |                      |   |                    |
| nm-tst                 | 29.494383528         | 4 | 3.0368508530000002 |
| org-refile-get-targets | 3.635947646          | 1 | 0.4542479730000002 |
| nm-tst                 | 36.537926593         | 4 | 1.1297576349999998 |
| org-refile-cache-clear | 0.009665364999999999 | 0 | 0.0                |
| nm-tst                 | 23.283457105         | 4 | 1.0536496499999997 |

Markers add +7sec.

The improvement is there, though markers still somehow come into play.
I speculate that limiting the number of checked markers might also
force adding extra temporary markers to the list, but I haven't looked
into that possibility for now. It might be better to discuss this with
emacs-devel before trying too hard.

>> Finally, FYI. I plan to work on an alternative mechanism to access Org
>> headings - generic Org query library. It will not use markers and
>> implement ideas from org-ql. org-refile will eventually use that
>> generic library instead of current mechanism.
>
> I suppose that markers might be implemented in an efficient way, and
> much better performance may be achieved when low-level data structures
> are accessible. I am in doubts concerning attempts to create something
> that resembles markers but based purely on high-level API.

I am currently using a custom version of org-ql utilising the new
element cache. It is substantially faster compared to the current
org-refile-get-targets. The org-ql version runs in <2 seconds at worst
when calculating all refile targets from scratch, while
org-refile-get-targets takes over 10sec. The org-ql version gives no
noticeable latency when there is an extra text query to narrow down the
refile targets. So, it is certainly possible to improve the performance
just using the high-level org-element cache API + regexp search without
markers.

Note that we already have something resembling markers in the
high-level API. It is what the org-element cache is doing - on every
user edit, it re-calculates the Org element boundaries (note that
Nicolas did not use markers to store the boundaries of Org elements).
The merged headline support in the org-element cache is the first stage
of my initial plan to speed up searching stuff in Org - be it agenda
items, IDs, or refile targets.

Best,
Ihor
On 02/03/2022 22:12, Ihor Radchenko wrote:
> Max Nikulin writes:
>
> I tend to agree after reading the code again.
>
> I tried to play around with that marker loop. It seems that the loop
> should not be mindlessly disabled, but it can be sufficient to check
> only a small number of markers in front of the marker list. The cached
> temporary markers are always added in front of the list.

I did not try to say that the loop over markers may be just thrown
away. By the way, for a sequential scan (with no backward searches) a
single marker might work reasonably well.

Some kind of index for fast mapping between bytes and positions should
be maintained at the buffer level. I hope that, when properly designed,
such a structure may minimize the amount of recalculation on each edit.
I mean some hierarchical structure of buffer fragments, where markers
keep relative offsets from the beginning of the fragment they belong
to. The hierarchy of fragments is enough to provide an initial
estimation of the position for a byte index. Only markers within the
fragment that is changed need an immediate update.

> I am currently using a custom version of org-ql utilising the new
> element cache. It is substantially faster compared to current
> org-refile-get-targets. The org-ql version runs in <2 seconds at worst
> when calculating all refile targets from scratch, while
> org-refile-get-targets is over 10sec. org-ql version gives 0 noticeable
> latency when there is an extra text query to narrow down the refile
> targets. So, it is certainly possible to improve the performance just
> using high-level org-element cache API + regexp search without markers.

It is up to you to choose at which level you prefer to optimize the
code. And it is only my opinion (I do not insist) that the benefits
from changes in the low-level code might be much more significant. I
like the idea of markers, but their current implementation is a source
of pain.

> (note that Nicolas did not use
> markers to store boundaries of org elements).

E.g. export-related code certainly does need markers. You experienced
enough problems with attempts to properly invalidate the cache when the
lower level does not provide appropriate facilities.
Max Nikulin <manikulin@gmail.com> writes:
> It is up to you to choose at which level you prefer to optimize the
> code. And it is only my opinion (I do not insist) that benefits from
> changes in low level code might be much more significant. I like the
> idea of markers, but their current implementation is a source of pain.
>
>> (note that Nicolas did not use
>> markers to store boundaries of org elements).
>
> E.g. export-related code certainly does need markers. You experienced
> enough problems with attempts to properly invalidate cache when lower
> level is not supposed to provide appropriate facilities.
I understand your argument. However, I feel discouraged from
contributing to Emacs development because most Org users will not
benefit from such a contribution for a long time, not until the next
several major versions of Emacs are released. So, I currently prefer to
contribute backwards-compatible high-level code and leave the Emacs
core for the future.
Best,
Ihor
Dear all,

There were several people who came to the last meetup looking for
information about debugging Org mode. The last meetup was rather
unhelpful in this regard since we dove into a specific use-case.

I plan to try once more providing a more general introduction to Org
(and Emacs) debugging. Tentatively, I plan to talk about:
1. Running Emacs with clean configuration + latest version of Org
2. Bisecting config to find configuration-related issues
3. Using Emacs profiler and sharing profiler results
4. Answer any questions on the first three topics

After the introduction, we can continue with interactive debugging if
there is anyone experiencing performance (or other) issues with Org and
willing to share their screen.

Note that using a microphone and/or camera should not be required.
Jitsi does have chat.

The time will be the same: 9pm SG time (4pm Moscow; 8am New York; 1pm
London). Sat, Mar 26

I will post the link to the meeting one hour before the meeting start.

Best,
Ihor
Ihor Radchenko <yantar92@gmail.com> writes:
> The time will be the same: 9pm SG time (4pm Moscow; 8am New York; 1pm
> London). Sat, Mar 26
**8am New York -> 9am

I missed the daylight saving time change compared to the last meeting.
Best,
Ihor
On Wed, Mar 23, 2022 at 7:10 AM Ihor Radchenko <yantar92@gmail.com> wrote:
>
> Dear all,
>
> There were several people who came to the last meetup looking for
> information about debugging Org mode. The last meetup was rather
> unhelpful in this regard since we dove into a specific use-case.
>
> I plan to try once more providing a more general introduction to Org
> (and Emacs) debugging. Tentatively, I plan to talk about:
> 1. Running Emacs with clean configuration + latest version of Org
> 2. Bisecting config to find configuration-related issues
> 3. Using Emacs profiler and sharing profiler results
> 4. Answer any questions on the first three topics

This is a great idea, Ihor. Have you considered recording this part
and sharing it?

Bruce

> After the introduction, we can continue with interactive debugging if
> there is anyone experiencing performance (or other) issues with Org
> and willing to share screen.
>
> Note that using microphone and/or camera should not be required. Jitsi
> does have chat.
>
> The time will be the same: 9pm SG time (4pm Moscow; 8am New York; 1pm
> London). Sat, Mar 26
>
> I will post the link to the meeting one hour before the meeting start.
>
> Best,
> Ihor
On Thu, Mar 24, 2022 at 7:28 AM Bruce D'Arcus <bdarcus@gmail.com> wrote:
> On Wed, Mar 23, 2022 at 7:10 AM Ihor Radchenko <yantar92@gmail.com> wrote:
> >
> > Dear all,
> >
> > There were several people who came to the last meetup looking for
> > information about debugging Org mode. The last meetup was rather
> > unhelpful in this regard since we dove into a specific use-case.
> >
> > I plan to try once more providing a more general introduction to Org
> > (and Emacs) debugging. Tentatively, I plan to talk about:
> > 1. Running Emacs with clean configuration + latest version of Org
> > 2. Bisecting config to find configuration-related issues
> > 3. Using Emacs profiler and sharing profiler results
> > 4. Answer any questions on the first three topics
>
> This is a great idea, Ihor. Have you considered recording this part
> and sharing it?

I was just going to ask the same thing! I missed the last one too and
would like to benefit from your efforts this time.
"Bruce D'Arcus" <bdarcus@gmail.com> writes:
>> I plan to try once more providing a more general introduction to Org
>> (and Emacs) debugging. Tentatively, I plan to talk about:
>> 1. Running Emacs with clean configuration + latest version of Org
>> 2. Bisecting config to find configuration-related issues
>> 3. Using Emacs profiler and sharing profiler results
>> 4. Answer any questions on the first three topics
>
> This is a great idea, Ihor. Have you considered recording this part
> and sharing it?
I was thinking about recording. I can probably record the parts where I
talk, but I will need consent from others for everything else. Also, I
am not sure how to share such a recording; I am reluctant to use
YouTube. Or I may sum everything up in a text post.
Best,
Ihor
Ihor Radchenko <yantar92@gmail.com> writes:

> The time will be the same: 9pm SG time (4pm Moscow; 8am New York; 1pm
> London). Sat, Mar 26
>
> I will post the link to the meeting one hour before the meeting start.

Here is the link https://teamjoin.de/Org-dev-profiling-20220326-d708k

Best,
Ihor
Ihor Radchenko <yantar92@gmail.com> writes:

>> The time will be the same: 9pm SG time (4pm Moscow; 8am New York; 1pm
>> London). Sat, Mar 26
>>
>> I will post the link to the meeting one hour before the meeting start.
>
> Here is the link https://teamjoin.de/Org-dev-profiling-20220326-d708k

The recording is available at
https://open.tube/videos/watch/4d819114-43bf-42df-af94-f94fc53dd0d9

Summary of the talk:

Table of Contents
─────────────────

1. Testing bugs in clean environment and latest Org
.. 1. Org manual!
.. 2. Alternative demo
.. 3. What to report
2. Testing bugs in personal config (bisecting config)
3. Using Emacs profiler and sharing profiler results
.. 1. The basic idea
.. 2. Profile buffer
.. 3. Caveats :ATTACH:


1 Testing bugs in clean environment and latest Org
══════════════════════════════════════════════════

1.1 Org manual!
───────────────

<https://orgmode.org/> -> Contribute -> Feedback (yes, it is a bit
obscure) -> <https://orgmode.org/org.html#Feedback>

1.2 Alternative demo
────────────────────

• Fetch the latest Org <https://orgmode.org>
  ┌────
  │ cd /tmp/ # on Linux, can be any other dir.
  │ git clone git://git.sv.gnu.org/emacs/org-mode.git # You need git to be installed.
  └────
  Alternative: <https://elpa.gnu.org/packages/org.html> (only the
  latest stable version, aka the bugfix branch)
• Create a minimal working environment
  ┌────
  │ cd org-mode
  │ git checkout main
  │ # or
  │ # git checkout bugfix
  │ make cleanall # useful if you re-use the already downloaded dir
  │ make autoloads # auto-generate some files for Emacs
  └────
• Run clean Emacs
  ┌────
  │ emacs -Q -L ./lisp -l org
  │ # or to open a clean org buffer
  │ # emacs -Q -L ./lisp -l org /tmp/test.org
  │ # or use a minimal configuration saved in /tmp/test.el, if required
  │ emacs -Q -L ./lisp -l org -l /tmp/test.el /tmp/test.org
  └────
• Enable extra debugging
  Put the following into test.el
  ┌────
  │ ;; Activate generic debugging.
  │ (setq debug-on-error t
  │       debug-on-signal nil
  │       debug-on-quit nil)
  │ ;; Activate org-element debugging.
  │ (setq org-element--cache-self-verify 'backtrace
  │       org-element--cache-self-verify-frequency 1.0 ; can be less if there are lags.
  │       org-element--cache-map-statistics t)
  └────

1.3 What to report
──────────────────

There is some common information we find extremely useful when
diagnosing bug reports.
• The easiest is using M-x `org-submit-bug-report'
  • Most of the commonly required info will be auto-inserted into the
    email
  • You don't have to configure Emacs for sending email. You can simply
    use `org-submit-bug-report' and copy-paste the text into the email
    client of your choice.
• If there are warnings, also share what is inside the `*Warnings*'
  buffer: M-: `(switch-to-buffer "*Warnings*")'
• Same for `*Messages*': M-: `(switch-to-buffer "*Messages*")'
• Screenshots are often helpful

2 Testing bugs in personal config (bisecting config)
════════════════════════════════════════════════════

<https://github.com/Malabarba/elisp-bug-hunter>
• M-x `bug-hunter-init-file'
• Usually works out of the box, but may not give useful results when
  `init.el' is a single sexp block
  ┌────
  │ (let ((org-file '"/home/yantar92/Git/emacs-config/config.org")
  │       (el-file '"~/.emacs.d/config.el"))
  │   (setq init-flag t)
  │   (setq comp-deferred-compilation-deny-list '("pdf-cache" "org-protocol"))
  │   (load el-file))
  └────
  • Then, you need to dump the actual config into `init.el'
• Sometimes, a bug in a personal config is caused by an interaction
  between two packages
  ┌────
  │ (require 'package1)
  │ ;; some setting causing package1 to break, but only when package2 below is loaded
  │ (require 'package2)
  └────
  • `bug-hunter' will then point to `(require 'package2)' as the
    problematic line, instead of the actual setting
  • It can help then to reshuffle the config, so that `package1' and
    `package2' are loaded early:
  ┌────
  │ (require 'package1)
  │ (require 'package2)
  │ ;; some setting causing package1 to break, but only when package2 below is loaded
  └────

3 Using Emacs profiler and sharing profiler results
═══════════════════════════════════════════════════

3.1 The basic idea
──────────────────

1. M-x `profiler-stop'
   *Important*: if a profiler is already running, the report will
   contain irrelevant data
   • `profiler-stop' may not be available right after Emacs starts. If
     it is not listed in the M-x completions, there is no need to run
     it
2. M-x `profiler-start' <RET> `cpu' <RET>
3. Do the slow stuff you want to test
4. M-x `profiler-report'
5. M-x `profiler-report-write-profile'
6. Attach the report file to your bug report
7. (FYI) M-x `profiler-find-profile' can be used to view the saved
   report later

3.2 Profile buffer
──────────────────

• You can use <TAB> to fold/unfold entries
  • … can reveal useful info!
  • so does `redisplay_internal (C function)'
• Useful stuff reveals itself as the "%" value changes noticeably
  deeper into the nested tree

3.3 Caveats
───────────

• If your Emacs hangs for a long time while recording a profile and you
  abort with `C-g', the profile will likely contain garbage data
• Calling M-x `profiler-report' twice in a row will not give anything
  useful: the second call will profile the actions done between the
  first and second calls.
• The profiler does not show how frequently a function is called
  • Information on the number of calls can be obtained using another
    kind of profiler: `ELP'
  ┌────
  │ (require 'elp)
  │ (elp-restore-all) ;; Cleanup
  │ (elp-instrument-function #'org-element-cache-map) ; or any other function
  │ ;; Do the slow stuff.
  │ (elp-results)
  └────
• Byte-compilation and native-compilation can sometimes create cryptic
  profiles
  • It helps to go to the function definition manually and re-evaluate
    it
    1. M-x `describe-function' <RET> `function-name' <RET>
    2. Go to the definition "… is an interactive native compiled Lisp
       function in ‘some-file-click-on-it.el’."
    3. C-M-x (or M-x `eval-defun')
    4. Redo the profiling

Best,
Ihor
Dear all,

I am continuing my experiment with Org mode meetups and online
debugging. This time, I plan to

1. Talk about contributing patches to Org
   - Applying patches sent by others
   - Testing changes (make test)
   - Creating patches
   - Sending patches to the mailing list
2. Talk about and debug any issues related to Org interactively via
   screen sharing.

Note that using a microphone and/or camera should not be required.
Jitsi does have chat.

The time will be the same: 9pm SG time (4pm Kyiv; 2pm London; 9am New
York). Sat, Apr 23

I will post the link to the meeting one hour before the meeting start.

Best,
Ihor
Ihor Radchenko <yantar92@gmail.com> writes:

> I will post the link to the meeting one hour before the meeting start.

https://teamjoin.de/Org-dev-profiling-202204-23-d708k

See you,
Ihor
[-- Attachment #1: Type: text/plain, Size: 290 bytes --] Ihor Radchenko <yantar92@gmail.com> writes: > Ihor Radchenko <yantar92@gmail.com> writes: > >> I will post the link to the meeting one hour before the meeting start. > > https://teamjoin.de/Org-dev-profiling-202204-23-d708k Summary of the discussion is in the attached .org. Best, Ihor [-- Attachment #2: summary.org --] [-- Type: application/vnd.lotus-organizer, Size: 9936 bytes --] # Created 2022-04-24 Sun 12:25 #+title: <2022-04-23 Sat 20:00>--<2022-04-23 Sat 23:00> Ihor Radchenko [ML:Org mode] (2022) #3 Org mode profiling meetup on Sat, Apr 23 (was: #2 Org mode profiling meetup on Sat, Mar 26) #+date: May 11, 2020 #+author: Ihor Radchenko Meeting link: https://teamjoin.de/Org-dev-profiling-202204-23-d708k Summary of the discussions: - marking bugs in updates.orgmode.org: Woof! - This is mainly for Org maintainers - There are some bugs in the current version of the mailing list control code - It should be updated in the coming weeks though, according to Bastien - patches backlog and merge policy - We have accumulating patch backlog at https://updates.orgmode.org - Mostly because our main maintainer has been busy - We need more maintainers! Feel free to apply by writing to Bastien (https://bzg.fr/en/) - I recently figured that maintainers with write access can freely push new feature patches to unmaintained (with author/maintainer missing or not being active on the list) files. - debugging infinite recursion in org-eldoc - https://list.orgmode.org/CAFyQvY1QNfxBOrCor3pLR3MoMpMemD9znhX+GaV4nQKiDS=bjQ@mail.gmail.com/T/#u - debugging recent bug report in to-be-merged org-fold branch (fontification) - https://orgmode.org/list/8735i5gd8n.fsf@gmail.com - I now managed to fix it. 
    Going to push it soon.
- Searching for specific changes via magit-log
  - Some parts of the Org codebase can be hard to understand
  - ~magit-file-dispatch~ lets you search the git log history associated with the selected region
  - Commit messages in the history may reveal why one or another piece of code is there

Also, I had prepared and even discussed small pieces of the presentation below. However, most of the people who joined the meeting already knew all of it or were not interested. I am still leaving it here so it does not go completely to waste.

* Contributing patches to Org

Before we start:
1. Clone the latest Org repo (see https://orgmode.org)
   #+begin_src bash
   git clone git://git.sv.gnu.org/emacs/org-mode.git
   cd org-mode
   #+end_src
2. If you are contributing/testing a new feature
   #+begin_src bash
   git checkout main
   #+end_src
3. If you are contributing/testing a bugfix
   #+begin_src bash
   git checkout bugfix
   # later, also need to confirm that everything works fine on main
   #+end_src
4. Use Magit! https://magit.vc/
   - Changing branch is ~magit-branch~ "b" -> branch "b"

** Applying patches sent by email

Case 1: Attachments to emails
- Attachments can simply be saved
- It is good practice to create a temporary branch
  - ~magit-branch~ "b" -> new "c" -> starting at main/bugfix -> patch/mypatch-or-any-other-name
  - ~magit-patch~ or "W" in the magit status buffer -> Apply patches (w) -> patches (w)
- *Or just use piem* (see below)

Case 2: Embedded patches (I do not like them)
- You can use ~git am~ directly, but then you need to remember all its command line args
  - http://git-scm.com/docs/git-am
- I prefer https://git.kyleam.com/piem
  #+begin_src emacs-lisp
  (require 'piem)
  (add-to-list 'piem-inboxes
               '("Org mode"
                 :coderepo "~/Git/org-mode/"
                 :address "emacs-orgmode@gnu.org"
                 :listid "emacs-orgmode.gnu.org"
                 :url "https://orgmode.org/list/"
                 :maildir "~/Mail/Orgmode-maillist/orgmode/"))
  (piem-notmuch-mode +1)
  ;; (piem-gnus-mode +1)
  ;; (piem-elfeed-mode +1)
  ;; (piem-rmail-mode +1)
  #+end_src
- Just run ~M-x piem-am~ from the email buffer
  - It will do everything from Case 1 automatically

** Testing Org mode patches

It's generally easy:
#+begin_src bash
cd org-mode
make test
#+end_src

Result (a good one):
#+begin_quote
Ran 927 tests, 927 results as expected, 0 unexpected (2022-04-23 18:39:33+0800, 25.511741 sec)
15 expected failures
#+end_quote

For more in-depth testing, there are two things to consider:
1. Different Emacs versions
   #+begin_src bash
   # emacs-26 is the executable name or path of an installed Emacs 26
   make cleanall
   make EMACS=emacs-26 test
   #+end_src
2. Different language environments
   #+begin_src bash
   LANG="de_DE.UTF-8" make test
   #+end_src

:CATCHALL-SCRIPT:
#+begin_src bash
#!/bin/bash
# [[file:../../Org/system-config.org::*Testing emacs repo][Testing emacs repo:1]]
function yes_or_no {
    while true; do
        read -p "$* [y/n]: " yn
        case $yn in
            [Yy]*) return 0 ;;
            [Nn]*) echo "Aborted" ; return 1 ;;
        esac
    done
}
set -e
make cleanall
make EMACS=emacs-26 $* test || (echo "Failed to run tests using $(emacs-26 --version | head -n1)"; yes_or_no "Continue?")
make cleanall
make EMACS=emacs-27 $* test || (echo "Failed to run tests using $(emacs-27 --version | head -n1)"; yes_or_no "Continue?")
make cleanall
make EMACS=emacs-28-vcs $* test || (echo "Failed to run tests using $(emacs-28-vcs --version | head -n1)"; yes_or_no "Continue?")
make cleanall
make EMACS=emacs-29-vcs $* test || (echo "Failed to run tests using $(emacs-29-vcs --version | head -n1)"; yes_or_no "Continue?")
make cleanall
LANG="C" make EMACS=emacs-29-vcs $* test || (echo "Failed to run tests using LANG=C $(emacs-29-vcs --version | head -n1)"; yes_or_no "Continue?")
make cleanall
LANG="de_DE.UTF-8" make EMACS=emacs-29-vcs $* test || echo "Failed to run tests using LANG=de_DE.UTF-8 $(emacs-29-vcs --version | head -n1)"
# Testing emacs repo:1 ends here
#+end_src
:END:

*** What to do in case of error?
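A failing =make test= run ends with a report like the excerpt just below. Under the hood this is plain ERT batch mode, which can also be driven by hand. A minimal self-contained sketch (the file name and test name here are invented for illustration, and an ~emacs~ binary on PATH is assumed):

#+begin_src bash
# Write a throwaway ERT test file (file and test names are invented).
cat > /tmp/demo-test.el <<'EOF'
(require 'ert)
(ert-deftest demo/arith ()
  (should (= (+ 1 2) 3)))
EOF
# Run only the tests whose names match the selector regexp, then exit
# with a status reflecting the results.
emacs -Q -batch -l /tmp/demo-test.el \
      --eval '(ert-run-tests-batch-and-exit "demo/arith")'
#+end_src

Within the Org repository itself, =make BTEST_RE="..."= (used in the ~git bisect~ steps below) restricts the suite in the same way.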
#+begin_quote
17 unexpected results:
   FAILED  test-org-clock/clocktable/compact
   FAILED  test-org-clock/clocktable/extend-today-until
   ...
   FAILED  test-org/string-width
#+end_quote

**** Manual testing

Org mode uses the ERT testing library built into Emacs.
1. Run ~cd org-mode; make cleanall; make autoloads; emacs -Q -L ./lisp -l org~
   - *It is important _not_ to load your personal config.* Tests make certain assumptions about Org settings.
2. Open ~org-mode/testing/org-test.el~ and M-x eval-buffer
3. Open ~org-mode/testing/lisp/test-with-failing-test-name.el~ and M-x eval-buffer
4. M-x ert <RET> failed-test-name <RET>
   - For all tests, use M-x ert <RET> t <RET>
5. For more fine-grained testing, you can also use C-x C-e (eval-last-sexp) on "should" forms inside the test

*Keep in mind that some test failures (especially those related to asynchronous code like font-locking) may not be reproducible in interactive Emacs.*

**** ~git bisect~

1. Go to the magit status buffer
2. ~magit-bisect~ B -> start (B) -> this revision is erroneous <RET> -> some recent working rev (e.g. main~20)
3. From a terminal
   #+begin_src bash
   make cleanall; make BTEST_RE="^test-org-colview/columns-move-left" test
   # BTEST_RE limits the checked tests to what you specify.
   # There is no need to re-run all the tests again and again.
   #+end_src
4. From the magit status buffer:
   - If the error is still there: ~magit-bisect~ B -> bad (b)
   - No error: ~magit-bisect~ B -> good (g)

** Creating patches and sending them to the Org mailing list

With Magit, it is pretty much trivial:
1. Make sure that you are using the latest Org mode version
   - ~magit-fetch~ F -> origin (usually "u")
2. Make your changes to the org-mode code
3. Test them (as above)
4. From the magit status buffer: stage all (S) -> commit (c) -> commit (c)
5. Write a detailed commit message (see below)
6. Create the patch with ~magit-patch~ W -> create (c) -> create (c) -> <RET> (this will create patches from all new commits)
7. Write an email to emacs-orgmode@gnu.org
   - The subject should start with [PATCH]: =[PATCH] org.el: Refactor function=
   - Explain in the body why your patch should be merged
   - Attach your patch file
   - The patch will soon (5-15 min) appear at https://updates.orgmode.org/

*Note that part of the above steps can be automated with https://git.sr.ht/~yoctocell/git-email, but I do not (yet, [2022-04-23 Sat]) recommend it, as it is too early in development and has bugs.*

*** Writing commit messages

You need to follow the specific format detailed in https://orgmode.org/worg/org-contribute.html#commit-messages

Briefly:
- We generally put the main library or file affected by the patch at the beginning: ~org-element.el: Fix headline caching~
- The commit body should detail which files and functions have been changed and what exactly has been changed
  #+begin_quote
  - org-timer.el (org-timer-cancel-timer, org-timer-stop): Enhance message.
  (org-timer-set-timer): Use the number of minutes in the Effort property
  as the default timer value.  Three prefix arguments will ignore the
  Effort value property.
  #+end_quote
  - *There is no need to list the function/file names manually here*
    - In the magit commit message buffer, there is a diff window beside it
    - From the diff window, pressing "c" on a chunk will insert the changed function name as needed
- Use double spaces to separate sentences
- Quote =`function-or-symbol-name'= for easier automatic analysis
- It is good practice to reference the related discussion on the mailing list
  #+begin_quote
  See discussion in https://list.orgmode.org/orgmode/87levyzwsk.fsf@localhost/
  #+end_quote
- *If you have not signed a copyright assignment with the FSF, put*
  #+begin_quote
  TINYCHANGE
  #+end_quote
  *at the end of the commit message*
- _Beware that patches >15LOC require FSF copyright assignment_

*** Copyright assignment with FSF

To contribute significant (>15LOC) patches, we have a legal requirement that you transfer the copyright to the FSF.
All the details are in https://orgmode.org/worg/org-contribute.html#copyright
- It is generally fairly easy, unless your employer has weird policies
- Just send the https://orgmode.org/request-assign-future.txt form to assign@gnu.org
- They usually reply within a short time
- If there is no reply within 1 month, feel free to ask the Org mailing list to assist
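
As a footnote to the patch-creation steps above: for those not using Magit, the equivalent command-line flow can be sketched in a throwaway repository. All names below (branch, file, author) are invented for illustration; =git format-patch= writes one patch file per commit not on the given base branch:

#+begin_src bash
# Demonstrate the patch workflow in a throwaway repo (names invented).
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email "contributor@example.com"
git config user.name "Example Contributor"
echo "base" > file.txt
git add file.txt
git commit -qm "Initial commit"
git branch -M main                  # normalize the default branch name
git checkout -qb patch/example-fix  # temporary branch, as recommended above
echo "fix" >> file.txt
git commit -qam "file.txt: Example change

* file.txt: Describe what changed and why.

TINYCHANGE"
git format-patch main               # one .patch file per commit since main
#+end_src

The resulting =0001-*.patch= file is what gets attached to the email to emacs-orgmode@gnu.org.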