Today Is A Good Day To Teleport

E20 releases are gearing up, so it seems like a good time to talk about the state of Wayland compositor-ing that is going into this release. First, let’s talk features:

XDG-Shell: Fully supported; window geometry (CSD) accurately used for window placement/positioning/resistance.

XWayland: Fully supported; any issues that I am currently aware of are only fixable through upstream XWayland changes.

Multi-monitor: Fully supported; this has not undergone testing as extensive as the X11 RandR code, but it has worked for the scenarios that I’ve examined.

Key/Mouse/Edge Bindings: Fully supported.


There are probably some things I’m missing, but those are the basics. So now the question everyone wants answered: what’s missing? For things that the average user will care about, not much.

EFM DnD: Not currently implemented. My choices were either to fully NIH the EFL internal implementation in order to support nested compositors, or to use the EFL implementation and forgo nested compositing support, only to have to go back and NIH it anyway later.

No window close animations: This requires implementing the recently added SHM pool refcounting feature from upstream Wayland in order to avoid random crashes.

Restarting Enlightenment kills all applications: There’s a lot of work required, both in upstream Wayland and locally in EFL/Enlightenment, to make applications persist through restarts.

XWayland clients do not smoothly resize: This seems to be an issue in XWayland.

Keyboard layout support NYI: As far as I know there’s no “standard” method for doing this in Wayland yet, so it will either require more NIH work to handle this in the compositor or some spec changes in Wayland/*shell to dictate how it should be handled.


That sums up most of the details that people have been asking about. I’ll try to answer questions in the comments section and/or follow up with additional posts as necessary.


Given that the Enlightenment 20 release cycle is winding down, a look ahead to E21 reveals no urgent projects that will require 6+ months of my time the way Wayland did for E20.


As such, I’ve created this poll for people to vote on the development focus for E21. Anyone can vote, though registration (free) is required.

Desktop Compositing: How Much Overdraw Is Too Much?

I’d like to start this post with a thanks to Phoronix for using “journalism” to write an article using facts. Facts: they used to be commonplace.


In my previous post, I talked about the overall methodology with which I implemented Compiz plugin support. Now I’m going to go into some detail about the rendering portion.


The first thing to reiterate is that Compiz only performs damage calculations for the entire screen and does not track them per-window. This is, to say the least, problematic. There’s also no easy way to predict where a window will draw at any given time: there are window effects which cause clients to zoom in/out to/from the mouse cursor, and others which cause the client to bounce around outside of its frame region. The compositor must be prepared to draw each window at any geometry on the screen at any time, regardless of common-sense clipping.

The first attempt I made at getting things to work was just to get anything to render. This involved a lot of code surgery inside Compiz: removing overall screen effect drawing, lighting effects, some GL calls almost as old as I am, things of this nature. Compiz was allowed to render window contents and only window contents; whenever it had updates to render, the sandbox would be called into and all the effect plugins would get their turns to throw pixels around. Finally, I added a method for overriding the GL surface of a client’s rendered compositor image and stuck the texture into it for testing. Predictably, the client rendered on-screen upside down and without any effects.

The next step was equally predictable: putting the entire Compiz GL surface into the compositor object for rendering. As expected, the effects looked terrible because they were clipped by the client’s geometry. A wobbly window which cannot freely wobble is no wobbly window at all, says I!


This posed a bit of a technical problem, however. Evas has no mechanism (that I’m aware of) for saying “this is the image’s geometry, but also draw outside that geometry whenever I tell you to”; all images are clipped to their bounding box. There was only one option: use all the available power of my GPU.


Given that window damage only exists for the entire screen, the easiest choice, and also the only one I could think of at the time, was to make an image object the size of the screen for every window, then do fullscreen renders. Always. The image ignored mouse events and was added inside the client’s overall compositor object to preserve stacking, and then it was up to the Compiz integration to ensure that the window rendered in the “right” place in order to fool the user; the rendered image for the window is, in fact, in no way tethered to or inside of its frame, despite appearances.

With this done, only artifacts remained. The issue: when an effect draws outside the boundary of the window geometry, at what point do you clear the screen of the past renders? For example, if a window is animating a slide in from offscreen, how can I determine when to clean up the “effect” renders so that only the normal window content remains? This turned out to be a big blocker: every time I thought I’d figured out a reasonable algorithm for distinguishing effect draws from client draws, I’d find a corner case where something didn’t work as expected.

In the end, a number of changes needed to be made. For the specific case of the wobbly plugin, issues arose when releasing a window that had achieved a high velocity: the window would get stuck in a “wobbled” state. To avert this, the Compiz integration layer actually tells Compiz that the window has stopped being dragged only after all the rendering has completed, then performs a full redraw for that window’s fullscreen image object.

Another case was the animation plugin, where windows zoom in and out of the cursor position. This was more difficult to track down, as it was important to clear out the animation frames in order to have an effect which didn’t look like garbage. In order to solve the issue, I actually ended up tracking the GL drawing vertices for the window on each frame; if they changed between the pre-draw and draw phases, then an effect was mangling things and the entire window should be redrawn. Probably.

Some other smaller hacks were done here and there, but this is the gist of how things work under the hood. The last remaining known issue, aside from lacking capabilities for drawing any of the screen effects, is that compositor object mirrors are broken for windows. This means that the compositor is unable to create copies of window contents for reuse in other places, such as the pager gadget. I’m not entirely certain why this is happening. When passed the bound texture, the image mirrors don’t render anything at all, so attempting to scissor anything for a drawable wasn’t worth my time. Confusing, but I lacked the motivation to really dig into this any more than I already had.


I imagine that screen effects would not be terribly difficult to add; another fullscreen image object could be used for the whole thing, and it would probably render okay like that. Something for a rainy day, I suppose.


The downside of this method is obvious: for many frames, the compositor is doing fullscreen GL draws, potentially multiple times. With a powerful GPU it’s unlikely that anyone will notice much slowdown, but this is clearly not suitable for embedded systems. Then again, Compiz uses GL calls from ancient versions of (desktop) OpenGL, so it would need a full API interception layer to work on an embedded system in any form, or even on Wayland.