This repository was archived by the owner on Dec 15, 2022. It is now read-only.

add settings for context lines, show context in match view #847

Merged
merged 10 commits into atom:master from match_with_context on Apr 12, 2017

Conversation

dirk-thomas
Contributor

@dirk-thomas commented Jan 29, 2017

Requirements

Description of the Change

The patch adds two new options, searchContextLineCountBefore and searchContextLineCountAfter, which are passed down to scandal so that it returns matches with context lines around the matched line. The match view is modified so that each match is no longer a single li but a list which itself contains the context lines before, the matching line, and the context lines after.

To visually distinguish the context lines from the match line, the color of the match line has been updated. The way the ".selected" item is determined has also been updated so that the match line is highlighted rather than the first context line in the sub-list.
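
For illustration only, here is a minimal sketch of how two such settings could be declared in an Atom package's config schema. The key names searchContextLineCountBefore / searchContextLineCountAfter come from this description; the module shape and descriptions are assumed and may differ from the actual patch:

// Sketch of a package config schema; not the actual code from this PR
module.exports = {
  config: {
    searchContextLineCountBefore: {
      type: 'integer',
      default: 0,   // 0 keeps the previous behavior: no context lines
      minimum: 0,
      description: 'Number of context lines shown before each matching line'
    },
    searchContextLineCountAfter: {
      type: 'integer',
      default: 0,
      minimum: 0,
      description: 'Number of context lines shown after each matching line'
    }
  }
  // ...activate(), deactivate(), and the rest of the package's main module
}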

Alternate Designs

Other approaches:

In contrast to these, this PR does not require a TextBuffer for each search result. Instead it uses an extended version of scandal to provide the context directly in the returned match objects.

Benefits

It implements the long-desired feature described in #190. The approach requires fewer resources than the other approaches listed above.

Possible Drawbacks

The new settings default to 0 lines of context before and after, which matches the previous behavior. However, since the view structure has been updated to accommodate the optional context lines, this might affect compatibility. I don't have any experience with Atom views and therefore can't estimate how likely that is.

Applicable Issues / Remaining Todos

  • In the current state, when the search results list a file which is not yet open and you click on the result, the match is updated and loses the context information. This is because the search is repeated in the buffer (instead of using scandal through the workspace API). To address this, TextBuffer.scan needs to be extended to support the same new options as in the referenced scandal ticket. Therefore I have prefixed the title with [WIP] for now. This has been addressed with the PR to the text-buffer package.

  • The current settings don't use a custom order attribute and therefore appear in alphabetical order. Since it is a bit confusing that the before option comes after the after option, it might be good to add explicit order attributes to all options.

  • Add test coverage / update existing specs as soon as there is a consensus on the selected approach and the feature works.

Related Stuff

The following items would all be nice to have, but they are already issues in the current state and I consider them orthogonal to this PR. They can be addressed in the future in separate PRs:

  • Correct indentation (multiple whitespace characters in a line are rendered as a single space)
  • Multiple results in a single line (and now also overlapping lines of context) could be deduplicated
  • Gaps between code in the same file could be expandable

@dirk-thomas changed the title from "[WIP] add settings for context lines, show context in match view" to "add settings for context lines, show context in match view" on Jan 31, 2017
@dirk-thomas
Contributor Author

I removed the "[WIP]" prefix from the title since the feature is "complete" now. While there are still failing specs to address I think it would be good to review this first to ensure there is a consensus on the approach.

@maxbrunsfeld
Contributor

@dirk-thomas Thanks for this! Could you provide screenshots or animated GIFs showing what this new UI looks like?

One concern: I'm not sure that a config setting is the best way for the user to specify how many context lines they want. I would think that you'd want to vary these numbers on a search-by-search basis. For me personally, I usually wouldn't want any lines of context, but in certain searches, I'd want to see some preceding and/or following context, depending on what I'm looking for. So I would think that we might actually want some kind of UI for specifying the amount of context.

What do you think?

/cc @simurai @atom/feedback

@dirk-thomas
Contributor Author

dirk-thomas commented Jan 31, 2017

I kept the UI changes minimal for now so that the PR is easier to review. Please see the current results with 2 lines of context above and below. The matched line is highlighted in white (new, to better distinguish matched lines from context lines) and the matched text is highlighted with a blue box (as it was before).

[screenshot: search-results-with-context]

While the PR only adds package settings for the number of context lines, the API is designed so the arguments can be provided for each query. So it should be easy to customize the UI to configure it for every search. I am not familiar with the UI elements in Atom; maybe someone could suggest how to set up the additional elements beside the existing search options.
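
As a rough illustration of what passing those arguments per query could look like (every name below is hypothetical; this is not the PR's actual API):

// Hypothetical per-query call; names are illustrative only
function searchWithContext(resultsModel, findPattern, pathsPattern) {
  return resultsModel.search(findPattern, pathsPattern, '', {
    leadingContextLineCount: 2,   // per-search values instead of the global settings
    trailingContextLineCount: 2
  })
}

A per-search UI would then only need to fill in these two numbers instead of reading the package settings.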

For the search results I could see a few enhancements to improve the view (specific to the new context):

  • reduce spacing between the context lines belonging to a single result (maybe same spacing as lines in a TextEditor?)
  • more separation between search results (currently only a 2px padding is used - same as before), or an empty line, or a horizontal line for visual separation
  • optionally fold / unfold the context (that might be another way to address the use case you mentioned, where you only sometimes need to see the context)

Besides these, there are other possible improvements, but they also apply to results without context:

  • render indentation of lines correctly
  • deduplicate results, if multiple matches are found in the same line

@winstliu
Contributor

winstliu commented Feb 1, 2017

a horizontal line for visual separation

I like this option the best. Just scanning the screenshot it's very hard to tell where a jump in the file occurs.

@dirk-thomas
Contributor Author

@50Wliu I added a second commit (747c91e) which simply modifies the CSS. The result is definitely easier to "parse" visually:

[screenshot: search-results-with-context-with-sep-line]

@simurai
Contributor

simurai commented Feb 1, 2017

@maxbrunsfeld I usually wouldn't want any lines of context, but in certain searches, I'd want to see some preceding and/or following context, depending on what I'm looking for.

What about something similar to the "blue expanding button" on github.com? That way you could get context for a specific line. Maybe each click increases the context by 5 extra lines.

[image: GitHub's blue expanding button]

Then there could also be an <input class="input-number" type="number" /> in the header where you can update the config for all lines, so that you don't have to open the package settings every time. Not sure if it needs separate values for before and after?

[image: number input in the header]

Scroll position would probably quickly get out of hand.

Anyways, just some ideas. It could also be considered in a follow-up PR to keep this one smaller in scope.

@dirk-thomas
Contributor Author

@simurai The advantage of this PR over the other approaches is that it relies on scandal to return the context lines together with each match. That is more efficient because it does not require a text buffer to be opened for every result (consider searching through many large files). But as a consequence you can't just add "more" context on demand, since that would require opening the file containing the match again and reading the additional lines.

As you mentioned, it could be considered in a follow-up PR.

@simurai
Contributor

simurai commented Feb 2, 2017

@dirk-thomas would require opening the file containing the match again and reading the additional lines.

Oh, I see. OK, another naive idea 😇... what about always fetching some extra context and hiding it when initially shown? Maybe keep the context in memory, or add it to the DOM and hide it with CSS? Then when expanding it would be instantly available. Of course, this only makes sense if the extra context doesn't significantly slow down the search.

@michaeljonathanblack

michaeljonathanblack commented Feb 2, 2017

@dirk-thomas The horizontal rules are a little noisy visually.

How about using ellipses in place of a line number instead?

[image: mockup using ellipses in place of line numbers]

I also snuck in some highlighting on matching line numbers, but don't mind me :)

@dirk-thomas
Contributor Author

@simurai The optional folding / unfolding of the context was what I tried to suggest earlier.

@mherodev The ellipses would be trivial to implement instead of the horizontal line. I would rather wait until we have a consensus on which direction this should go before updating the PR. Your picture shows nicely how it could look.

@dirk-thomas
Contributor Author

@maxbrunsfeld Can you please change the label from "work-in-progress" to "needs-review"?

@brian-slate

Getting very close to feature parity with Sublime!

The only suggestion I have is to collapse overlapping results into a single block (Sublime does this) which will reduce noise. That may require a separate PR but it's worth mentioning. I attached a screenshot of Sublime for reference.

[screenshot: Sublime Text search results, for reference]

@dirk-thomas
Contributor Author

Since the beta of 1.15 was just released, it would be great if this long-awaited feature could be integrated (after a decision on how it should look has been made) in order to become available in the next beta (1.16).

@lee-dohm
Contributor

@dirk-thomas I say go with the horizontal lines and we can iterate on it once it has been merged.

@dirk-thomas
Contributor Author

@lee-dohm Sounds good to me. That matches the current state of the patch.

@lee-dohm
Contributor

@dirk-thomas Everything is good for final review then?

@dirk-thomas
Contributor Author

@lee-dohm I would say yes. If this gets a +1 I would update the specs to match the new HTML structure, if that is ok.

@lee-dohm
Contributor

@dirk-thomas you have a go 🚀

@maxbrunsfeld
Contributor

maxbrunsfeld commented Mar 3, 2017

But if you consider potentially overlapping matches it becomes ambiguous which match you are referring to.

AFAIK, it's built into the search mechanism that we will never produce overlapping matches, so I don't think this will be an issue for us.

the behavior of pageUp / pageDown has changed. With my PR when scrolling a page down the selected search result was always at the top of the view.

The behavior should be that the selected result stays in the same position within the viewport, not necessarily at the top of the view. This is what happens on master, and I thought that I saw this working fine on my branch after the merge. Let me take another 👀 .

Also the horizontal line separating results is gone now on your new branch. I am not sure if that was intentional or if that is a regression too?

Yeah, I'd actually rather not introduce the horizontal line yet. The reason is that I'm ambivalent as to whether it's visually helpful, and it complicates the implementation a little bit, because in order to get good scrolling performance, we now are responsible for calculating heights and positions of search results manually. Sorry for not mentioning that.

@dirk-thomas
Contributor Author

But if you consider potentially overlapping matches it becomes ambiguous which match you are referring to.

AFAIK, it's built into the search mechanism that we will never produce overlapping matches, so I don't think this will be an issue for us.

So it might become a problem in the future, once the search mechanism is "fixed" 😉. But we can think about that when that happens...

Also the horizontal line separating results is gone now on your new branch. I am not sure if that was intentional or if that is a regression too?

Yeah, I'd actually rather not introduce the horizontal line yet. The reason is that I'm ambivalent as to whether it's visually helpful, and it complicates the implementation a little bit, because in order to get good scrolling performance, we now are responsible for calculating heights and positions of search results manually. Sorry for not mentioning that.

That is totally fine by me.

@dirk-thomas
Contributor Author

the behavior of pageUp / pageDown has changed. With my PR when scrolling a page down the selected search result was always at the top of the view.

The behavior should be that the selected result stays in the same position within the viewport, not necessarily at the top of the view. This is what happens on master, and I thought that I saw this working fine on my branch after the merge. Let me take another 👀 .

The current behavior on your branch scrolls by the clientHeight. That makes sense IMO, since the user expects to see exactly "all new content" without "missing any lines" from what they saw before.

But regarding the newly selected match: let's assume that the search results all have the same height, i.e. the same number of context lines, which means no matches close to the top or bottom of a file. Further assume that the client height is not close to a multiple of each result's height but something in between, e.g. 3.5 result heights. Then after scrolling a page down, the newly selected result must be either ~half a result height further up or further down than the previously selected item. Therefore the selection currently drifts slowly when triggering pageDown repeatedly. It seems that if the fractional part of the client height / result height is < 0.5 it drifts in one direction, and if it is > 0.5 it drifts in the other direction.

I am not sure what to do about it and what a "better" / expected behavior would be. I just think the current behavior sometimes looks "random" to the user, and the specs could only check that the new selection is within a certain range of the previous one, which is kind of fuzzy...
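
To make the drift concrete, here is a tiny standalone sketch with made-up numbers (a viewport of 3.5 result heights); it only illustrates the effect and is not the package's actual selection logic:

// Illustration of the drift described above; all numbers are made up
const resultHeight = 100      // px per result, including its context lines
const clientHeight = 350      // px of visible area, i.e. 3.5 result heights

let scrollTop = 0
let selectedTop = 0           // content offset of the selected result
for (let page = 1; page <= 3; page++) {
  scrollTop += clientHeight                                        // pageDown scrolls one viewport
  const target = selectedTop + clientHeight                        // aim for the same viewport position...
  selectedTop = Math.round(target / resultHeight) * resultHeight   // ...but snap to a result boundary
  console.log(`pageDown #${page}: selection is ${selectedTop - scrollTop}px from the top of the view`)
}
// Prints 50, 100, 150: each snap lands half a result height further down,
// so the selection's position within the viewport slowly drifts.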

@maxbrunsfeld
Contributor

I am not sure what to do about it and what a "better" / expected behavior would be. I just think the current behavior sometimes looks "random" to the user, and the specs could only check that the new selection is within a certain range of the previous one, which is kind of fuzzy...

Yeah, I agree; it's not ideal. But the page-down experience was never exactly great; until my most recent PR, adding results to the DOM would sometimes cause noticeable lag and inconsistent scrolling. And the issue you're describing here exists on master.

I guess an ideal solution would be to store off some kind of "goal offset" for the selected match, so that we could continue to round in the correct direction when paging up and down. That said, I don't think we need to do that as part of this PR.

What would you like to do?

@dirk-thomas
Contributor Author

The challenge is that the current behavior is not that easy to test in a spec. I could try to update the specs to check for some kind of valid "range" around where the new selection is supposed to be (probably the old location plus/minus the height of the largest possible result). If you think this is "good enough" for now, we can tackle the scrolling behavior separately, since it already exists on master.

@maxbrunsfeld
Contributor

maxbrunsfeld commented Mar 6, 2017

I think that when testing page-up and page-down, we can just set searchContextLineCountBefore and searchContextLineCountAfter to zero, so that the tests work more or less the same as they do on master. Would that solve the issue?
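
For reference, a sketch of how that could look in the specs, assuming the settings live under the find-and-replace config namespace and the usual Jasmine structure (not the actual spec code):

// Sketch of the suggestion above; the config namespace and spec structure are assumed
describe('core:page-up and core:page-down', () => {
  beforeEach(() => {
    // Disable context lines so the paging specs behave the same as on master
    atom.config.set('find-and-replace.searchContextLineCountBefore', 0)
    atom.config.set('find-and-replace.searchContextLineCountAfter', 0)
  })

  it('moves the selection by roughly one viewport', () => {
    // ...existing page-up / page-down assertions...
  })
})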

@dirk-thomas
Contributor Author

Probably, but then we wouldn't really test that the behavior still does something reasonable with context 😉

@dirk-thomas
Contributor Author

dirk-thomas commented Mar 7, 2017

I updated the pageUp / pageDown specs significantly. They now check that, after executing the page* command (see the sketch after this list):

  • the newly selected result is not the same as before
  • the vertical offset (position within the visible area) of the newly selected result is close to the previous one
  • the vertical position of the newly selected result is greater (pageDown) / smaller (pageUp) than before
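
Roughly, the checks have the shape sketched below (getSelectedItem, offsetWithinView, offsetInList, and maxResultHeight are placeholders, not the actual spec helpers):

// Hypothetical shape of the assertions; helper names are placeholders
const previous = getSelectedItem(resultsView)
const previousOffset = offsetWithinView(resultsView, previous)

atom.commands.dispatch(resultsView.element, 'core:page-down')

const current = getSelectedItem(resultsView)
const currentOffset = offsetWithinView(resultsView, current)

expect(current).not.toBe(previous)                                              // a different result is selected
expect(Math.abs(currentOffset - previousOffset)).toBeLessThan(maxResultHeight)  // similar position within the view
expect(offsetInList(current)).toBeGreaterThan(offsetInList(previous))           // further down the results list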

With this state all specs pass for me locally:

277 tests, 854 assertions, 0 failures, 0 skipped

I assume the CI won't pass since the other PRs haven't been merged and released yet?

@dirk-thomas
Contributor Author

I updated the patch in 47dd521 to use the same option names as in atom/text-buffer#195 (comment).

@dirk-thomas
Contributor Author

@maxbrunsfeld Is this still being considered for Atom 1.17 beta?

@lee-dohm
Contributor

lee-dohm commented Apr 6, 2017 via email

@lee-dohm
Contributor

@dirk-thomas Thanks for all your hard work and patience on this. I talked to the team about this PR in the team meeting this morning. What we would need for this to be merged and go out with v1.17:

  1. Resolve the conflicts that now exist
  2. CI to pass - I understand a suggestion was made to make the tests conditionally run based on Atom version

I understand that CI will pass soon enough as the features that this PR depends on make it out to Stable on their own. So we fully intend to integrate this PR; it just won't be with v1.17 without additional work.

Let me know if you have any questions!

@dirk-thomas
Contributor Author

@lee-dohm What is the timeline for 1.17? When does this need to be ready in order to be merged in time?

Regarding the compatibility with stable (old API) as well as the upcoming beta (new API): as far as I can see, the patch would require conditionals in many, many lines/places. Is that really the intention here? I would argue that it makes the code much harder to read and maintain. What is the appropriate condition to check for the Atom version that I should use all over the place?

@lee-dohm
Contributor

The idea is not to place version checks in the code but in the tests. The condition to check would be atom.getVersion() >= '1.17'. We're planning on cutting v1.17-beta very, very soon.

@maxbrunsfeld
Contributor

@dirk-thomas What I had imagined was something like:

if (parseFloat(atom.getVersion()) >= 1.17) {
  describe('when context lines are enabled', () => {

    // tests for the context line behavior

  })
}

I realize that we'd also need some conditional like this when we call editor.scan, because you made a change to the parameters that method takes. Are there other places where conditionals would be required?

@dirk-thomas
Contributor Author

@lee-dohm I don't know what exactly "very, very soon" means, but probably only a few days? I will try, but I can't promise to find the necessary spare time to work on updating this PR today (tomorrow?). If anybody else would like to rebase the PR and add the conditions, please feel free to do so. It would be really sad if this great feature were left out for yet another cycle...

@dirk-thomas
Contributor Author

dirk-thomas commented Apr 11, 2017

@lee-dohm @maxbrunsfeld I rebased the patch and used the Atom version to distinguish the signature of the TextEditor.scan function in https://github.com/atom/find-and-replace/pull/847/files#diff-eb24277759b68118e9e07ef629d193e8R250
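
For readers following along, the idea looks roughly like this (a paraphrased sketch, not the exact lines from the diff; the option and result field names are assumed):

// Paraphrased sketch of the version-dependent scan call; option names are assumed
const leadingContextLineCount = atom.config.get('find-and-replace.searchContextLineCountBefore')
const trailingContextLineCount = atom.config.get('find-and-replace.searchContextLineCountAfter')

if (parseFloat(atom.getVersion()) >= 1.17) {
  // Newer text-buffer: scan() accepts an options object with context line counts
  editor.scan(regex, {leadingContextLineCount, trailingContextLineCount}, (result) => {
    // result is expected to carry the context lines in addition to the match range
  })
} else {
  // Older API: scan() only takes the regex and the iterator, so no context lines
  editor.scan(regex, (result) => {
    // result.range only
  })
}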

@lee-dohm merged commit 7ffeecc into atom:master on Apr 12, 2017
@lee-dohm
Contributor

Thanks for all your hard work and perseverance on this @dirk-thomas!

@dirk-thomas
Contributor Author

🚢 Awesome. Looking forward to the upcoming beta!

@cvogt

cvogt commented Apr 13, 2017

@dirk-thomas thank you so much, you rock!!

@dirk-thomas deleted the match_with_context branch on May 2, 2017 17:36