Bug tracker and wiki

This bug tracker and wiki are being discontinued; please use https://github.com/Xpra-org/xpra instead.


Custom Query (2683 matches)

Results (112 - 114 of 2683)

Ticket Resolution Summary Owner Reporter
#149 fixed use XkbKeycodeToKeysym instead of XKeycodeToKeysym (deprecation) Antoine Martin Antoine Martin
Description

We get these warnings:

/usr/include/gtk-2.0/gtk/gtkitemfactory.h:47:1: warning: function declaration isn't a prototype [-Wstrict-prototypes]
bindings.c: In function '__pyx_f_8wimpiggy_8lowlevel_8bindings_KeysymToKeycodes':
bindings.c:10353:7: warning: 'XKeycodeToKeysym' is deprecated (declared at /usr/include/X11/Xlib.h:1695) [-Wdeprecated-declarations]
bindings.c: In function '__pyx_f_8wimpiggy_8lowlevel_8bindings__get_modifier_mappings':
bindings.c:11494:9: warning: 'XKeycodeToKeysym' is deprecated (declared at /usr/include/X11/Xlib.h:1695) [-Wdeprecated-declarations]
bindings.c: In function '__pyx_pf_8wimpiggy_8lowlevel_8bindings_68get_keycodes_down':
bindings.c:12712:5: warning: 'XKeycodeToKeysym' is deprecated (declared at /usr/include/X11/Xlib.h:1695) [-Wdeprecated-declarations]

See freedesktop.org bug 5349 and freedesktop.org bug 25732

Together with the fix in 108#comment:12, this should fix most keyboard-related issues?
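
For illustration, here is a minimal ctypes sketch (not xpra's actual Cython bindings) of the usual substitution: XKeycodeToKeysym(dpy, keycode, index) becomes XkbKeycodeToKeysym(dpy, keycode, 0, index), i.e. group 0 and level = index. The keycode used below is an assumption about a typical layout:

import ctypes
import ctypes.util

# load libX11, which also provides the Xkb entry points
x11 = ctypes.CDLL(ctypes.util.find_library("X11"))
x11.XOpenDisplay.restype = ctypes.c_void_p
x11.XOpenDisplay.argtypes = [ctypes.c_char_p]
x11.XCloseDisplay.argtypes = [ctypes.c_void_p]
x11.XkbKeycodeToKeysym.restype = ctypes.c_ulong
x11.XkbKeycodeToKeysym.argtypes = [ctypes.c_void_p, ctypes.c_ubyte,
                                   ctypes.c_int, ctypes.c_int]
x11.XKeysymToString.restype = ctypes.c_char_p
x11.XKeysymToString.argtypes = [ctypes.c_ulong]

dpy = x11.XOpenDisplay(None)            # uses $DISPLAY
keycode, group, level = 38, 0, 0        # keycode 38 is usually 'a' (assumption)
keysym = x11.XkbKeycodeToKeysym(dpy, keycode, group, level)
print(x11.XKeysymToString(keysym))      # b'a' on a typical layout
x11.XCloseDisplay(dpy)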

#152 fixed "xpra --use-display" error-resistance Antoine Martin pmarek
Description

xpra should do all necessary initializations _before_ taking over another display.

I just wanted to replace a hung xpra and used the same command-line arguments, plus "--use-display". But that got the running session killed:

$ /usr/bin/xpra ...
cannot start - failed to create tcp socket: [Errno 98] The address is already in use
removing socket ...

because the same "--bind-tcp" address cannot be bound twice.

  • Perhaps the bound socket could be passed in, like the Xvfb connection?
  • Try to get that socket bound _before_ taking over (see the sketch below)?
  • In case of an error, try to keep the Xvfb running (by starting a child that holds the socket?), so the applications stay alive.
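
A minimal sketch of that ordering (helper names hypothetical, not xpra's actual startup code): bind every listening socket first, so an address-in-use error aborts startup cleanly before the running session is touched:

import socket

def bind_listeners_first(tcp_addrs):
    # hypothetical helper: bind all --bind-tcp sockets up front, so a
    # failure aborts startup before the old display is taken over
    socks = []
    try:
        for host, port in tcp_addrs:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind((host, port))
            s.listen(5)
            socks.append(s)
        return socks
    except OSError as e:
        for s in socks:
            s.close()
        raise SystemExit("cannot start - failed to create tcp socket: %s" % e)

# only once every socket is bound would --use-display take over the display:
#   listeners = bind_listeners_first([("0.0.0.0", 10000)])
#   take_over_display(":10")    # hypothetical
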
#153 fixed xpra over high latency links batches too much Antoine Martin Antoine Martin
Description

As per this mailing list post, it seems that the damage sequence (which we use to see how far behind the client is) is causing the damage batch delay to increase when it is not necessary, which increases the picture latency.

The only alternative explanation is that somehow the "damage-sequence" packet is having an adverse effect, which is very unlikely. To test this, simply apply this patch to trunk:

--- src/xpra/server_source.py	(revision 975)
+++ src/xpra/server_source.py	(working copy)
@@ -248,7 +248,7 @@
             if self._damage_data_queue.qsize()>3:
                 #contains pixmaps before they get converted to a packet that goes to the damage_packet_queue
                 update_batch_delay("damage data queue overflow: %s" % self._damage_data_queue.qsize(), logp10(self._damage_data_queue.qsize()-2))
-        if not last_delta:
+        if True:
             return
         #figure out how many pixels behind we are, rather than just the number of packets
         all_unprocessed = list(self._damage_packet_sizes)[-delta:]

If the problem goes away, then it is clearly our batching decision-making (the part based on client feedback) that is wrong, as I suspect.


The problem is that although the behaviour is suboptimal over high-latency links, I do not know which application to test or what bandwidth limitations to apply to simulate the link in question (2Mbps down, 512kbps up?).

Using trickle, here is what I tested with:

  • Server:
    xpra --no-daemon --bind-tcp=0.0.0.0:10000 start :10
    
  • An app that will use lots of bandwidth (enlarge the window to use more):
    DISPLAY=:10 glxgears
    
  • The throttled client:
    trickle -d 256 -u 64 -s -L 300 xpra attach tcp:192.168.42.100:10000 --no-mmap
    

We try to converge on the best batch delay possible given the limited bandwidth, and it oscillates between 20ms and 400ms, clearly far too wide a range. I think that is because of buffering and "smoothing" over time: buffering lets lots of packets through (so we reduce the batch delay), then when we hit the limit packets get delayed (so we increase it). Each of these decisions arrives late, so the logic struggles to cope with the changing conditions and keeps oscillating. Slowing down the rate of change would definitely help in this particular case, but not in others... It might be worth taking the connection latency into account to reduce this rate of change.
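
To illustrate that last idea, here is a toy sketch (not xpra's implementation) in which the size of each batch-delay adjustment shrinks as the measured connection latency grows, so that stale feedback on a slow link produces smaller swings:

class BatchDelay:
    """Toy model: a batch delay adjusted by client feedback, where the
    adjustment step shrinks as the connection latency grows."""
    def __init__(self, min_ms=20.0, max_ms=400.0):
        self.delay = min_ms
        self.min_ms = min_ms
        self.max_ms = max_ms

    def update(self, factor, latency_ms):
        # factor > 1.0 means the feedback says "falling behind, slow down",
        # factor < 1.0 means "caught up, speed up".
        # On a high-latency link the feedback is stale, so damp the change
        # (the 100ms scale constant is arbitrary, for illustration only):
        damping = 1.0 / (1.0 + latency_ms / 100.0)
        target = self.delay * factor
        self.delay += (target - self.delay) * damping
        self.delay = min(self.max_ms, max(self.min_ms, self.delay))
        return self.delay

With the 300ms of latency injected by trickle above, the damping factor is 0.25, so each feedback event moves the delay only a quarter of the way towards its target, which should narrow the 20ms to 400ms oscillation.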

Thoughts welcome.
