Bug tracker and wiki

This bug tracker and wiki are being discontinued; please use https://github.com/Xpra-org/xpra instead.

Custom Query (2683 matches)



Results (22 - 24 of 2683)

#22 forwarding system tray
Resolution: worksforme | Owner: Antoine Martin | Reporter: pmarek

For even more complete integration into the client session, it would be nice to have the dbus socket forwarded too.

I don't know much about dbus ... but IIRC this would mean that the registered applications would have to be stored, so that the dbus login can be redone on re-connection.

It would be too much to hope for automatic dbus reconnect in each application, although the cleanest solution would surely be for libdbus to handle all of that itself.

Furthermore, there could be a way to intercept "exec" calls: quite a few programs simply call other programs, e.g. clicking a link invokes the firefox executable. If the calling application runs via xpra but firefox is installed on the client, this fails, unless there is some easy way to forward these calls too.

(Perhaps an xpra option like "xpra run-on-client <cmdline>" would be enough: either the call can be configured in the application, or a shell script placed early in PATH could forward it. xpra should provide some mechanism for this, as exec forwarding via ssh is not that easy to configure ...)
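The wrapper idea above can be sketched as a tiny dispatcher: a script named after the client-side program (e.g. "firefox"), placed early in PATH inside the xpra session, rewrites its command line to go through the proposed forwarding option. Note that "xpra run-on-client" is only the hypothetical subcommand suggested in this ticket, and the command set below is illustrative.

```python
# Sketch of the "run-on-client" wrapper idea from this ticket.
# HYPOTHETICAL: the "xpra run-on-client" subcommand does not exist;
# it is the option proposed above. Command names are illustrative.
import shlex

# programs that should run on the client rather than in the xpra session
CLIENT_SIDE_COMMANDS = {"firefox", "xdg-open"}

def rewrite_exec(cmdline: str) -> list:
    """Return the argv to actually exec: forwarded through xpra when the
    program should run on the client, unchanged otherwise."""
    argv = shlex.split(cmdline)
    if argv and argv[0] in CLIENT_SIDE_COMMANDS:
        # forward to the client via the hypothetical subcommand
        return ["xpra", "run-on-client"] + argv
    return argv

print(rewrite_exec("firefox https://example.com"))
print(rewrite_exec("ls -l /tmp"))
```

A shell script wrapper early in PATH would simply exec the rewritten argv, so applications calling "firefox" directly need no configuration change.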

#23 fast refresh (ie: fast scrolling) causes high cpu usage and no screen updates
Resolution: worksforme | Owner: Antoine Martin | Reporter: Antoine Martin

Not sure why the screen does not update (I don't get that behaviour here), but in any case we need to deal with fast updates better.

Maybe we can detect when a window is causing bursts of damage requests and buffer them for a bit; by the time we get around to processing them, we may be able to drop many duplicates for the same region, or, if the total surface of the damage requests is close to the full window size, just do one full refresh instead.
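The batching idea above can be sketched as a small coalescing step over buffered damage rectangles. This is an illustration of the proposal, not xpra's actual implementation; the function name and the 0.8 full-refresh threshold are assumptions.

```python
# Sketch of the proposed damage batching: drop duplicate regions, and
# fall back to one full-window refresh when the damaged area approaches
# the window size. Names and the threshold are illustrative assumptions.
from typing import List, Tuple

Rect = Tuple[int, int, int, int]  # x, y, width, height

def coalesce_damage(rects: List[Rect], win_w: int, win_h: int,
                    full_refresh_ratio: float = 0.8) -> List[Rect]:
    """Reduce a buffered burst of damage rectangles to the work we
    actually need to do."""
    # drop exact duplicate regions while preserving order
    unique = list(dict.fromkeys(rects))
    damaged_area = sum(w * h for (_x, _y, w, h) in unique)
    if damaged_area >= full_refresh_ratio * win_w * win_h:
        # close to the full window: a single full refresh is cheaper
        return [(0, 0, win_w, win_h)]
    return unique
```

For example, five identical damage requests for the same 100x100 region of a 200x200 window collapse to a single rectangle, while two requests covering the whole window collapse to one full refresh.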

The damage requests from the server end come via _contents_changed in server.py (from CompositeHelper and BaseWindowModel.setup()). Currently this fires _protocol.source_has_more(), which will write the packet if the write queue is empty. Could it be that very large packets make the write queue appear empty (it is) when in fact there is still a large chunk of data to be sent in _write_thread_loop? Or do the packets go out faster than the client can deal with?

More testing needed. Ideas and suggestions welcome.

#24 network read loop is highly inefficient
Resolution: fixed | Owner: Antoine Martin | Reporter: Antoine Martin

Whenever we read a bit of data from the socket (sometimes as small as just one character!) we schedule the main loop to call _handle_read. When it actually fires there may only be (sometimes less than) one real packet waiting there, yet it will fire as many times as the socket read loop originally fired. We want to ensure we don't schedule it again if it is already pending; this should save a lot of context switches and reduce load significantly. Using an atomic loop counter should probably be enough to achieve this.
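The proposed fix can be sketched as a "schedule at most once" guard: the read thread only queues the handler when one is not already pending, and the handler clears the flag when it runs. The class and method names are illustrative (not xpra's actual code), and a deque stands in for the main loop's idle queue.

```python
# Sketch of the proposed fix: schedule _handle_read at most once per
# burst of socket reads, instead of once per read. Names are
# illustrative; the deque stands in for the glib main loop idle queue.
import threading
from collections import deque

class ReadScheduler:
    def __init__(self, handle_read):
        self._handle_read = handle_read
        self._lock = threading.Lock()
        self._pending = False
        self.main_loop = deque()  # simulated main loop callback queue

    def on_socket_read(self, data):
        # called from the read thread for every chunk, however small
        with self._lock:
            if self._pending:
                return            # handler already queued: skip scheduling
            self._pending = True
        self.main_loop.append(self._fire)  # stand-in for idle_add()

    def _fire(self):
        with self._lock:
            self._pending = False  # allow the next burst to schedule again
        self._handle_read()

# usage: 100 tiny reads arrive before the main loop gets to run...
calls = []
sched = ReadScheduler(lambda: calls.append(1))
for _ in range(100):
    sched.on_socket_read(b"x")
while sched.main_loop:
    sched.main_loop.popleft()()
print(len(calls))  # the handler fired once, not 100 times
```

Clearing the pending flag before invoking the handler (rather than after) ensures data arriving during the handler's run still triggers a fresh schedule.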
