C# - Using async sockets on Windows Server 2008 R2 causes 100% CPU usage
I have a generic C# socket server that uses the asynchronous methods of the Socket class - BeginAccept(), BeginReceive(), etc. The server has been working great for the last 4 years at many customer sites running Windows Server 2003. I recently installed it on a Windows Server 2008 R2 server, 64-bit. Everything looks fine until the first client connects and the server issues the BeginReceive() and BeginAccept() calls in the accept handler. When that happens, CPU usage spikes to 100% and stays that way until I close the listening socket.
I'm not sure whether it matters, but the server is running in a virtual machine.
I have done a lot of testing, and nothing seems to help. Using Process Explorer, I can see that two threads are spun up shortly after the BeginReceive()/BeginAccept() calls, and they are the ones consuming the processor. Unfortunately, I am not able to reproduce the problem on my Windows 7 64-bit workstation.
I have also done a lot of research, and so far I have found the following two KB articles, which imply that Server 2008 R2 may have an issue in its TCP/IP components, with hotfixes available: KB2465772 and KB2477730. I am reluctant to have the customer install them until I am more certain they would fix the issue.
Has anyone else had this problem? If so, what did you have to do to resolve it?
Here is the method that I believe causes the situation:
private void AcceptCallback(IAsyncResult result)
{
    ConnectionInfo connection = new ConnectionInfo();
    try
    {
        // Finish the accept.
        Socket listener = (Socket)result.AsyncState;
        connection.Socket = listener.EndAccept(result);
        connection.Request = new StringBuilder(256);

        // Start a receive and a new accept.
        connection.Socket.BeginReceive(connection.Buffer, 0, connection.Buffer.Length,
            SocketFlags.None, new AsyncCallback(ReceiveCallback), connection);
        _serverSocket.BeginAccept(new AsyncCallback(AcceptCallback), listener);
        // CPU usage spikes to 100% shortly after this...
    }
    catch (ObjectDisposedException /*ode*/)
    {
        _log.Debug("[AcceptCallback] ObjectDisposedException");
    }
    catch (SocketException se)
    {
        connection.Socket.Close();
        _log.ErrorFormat("[AcceptCallback] Socket exception ({0}): {1} {2}",
            connection.ClientAddress, se.ErrorCode, se.Message);
    }
    catch (Exception ex)
    {
        connection.Socket.Close();
        _log.ErrorFormat("[AcceptCallback] Exception {0}: {1}",
            connection.ClientAddress, ex.Message);
    }
}
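(For context: ReceiveCallback is referenced above but not included in the post. A receive callback in this Begin/End pattern typically looks roughly like the sketch below; the ConnectionInfo members mirror those used in AcceptCallback, and the ASCII decoding is an assumption, not the poster's actual code.)

// Hypothetical sketch of ReceiveCallback, assuming ConnectionInfo exposes the
// Socket, Request (StringBuilder), and Buffer (byte[]) members used above.
private void ReceiveCallback(IAsyncResult result)
{
    ConnectionInfo connection = (ConnectionInfo)result.AsyncState;
    try
    {
        // Complete the pending read; 0 bytes means the peer closed the connection.
        int bytesRead = connection.Socket.EndReceive(result);
        if (bytesRead > 0)
        {
            // Accumulate the data (ASCII is an assumed encoding here).
            connection.Request.Append(
                Encoding.ASCII.GetString(connection.Buffer, 0, bytesRead));

            // Post the next read so there is always one receive outstanding.
            connection.Socket.BeginReceive(connection.Buffer, 0, connection.Buffer.Length,
                SocketFlags.None, new AsyncCallback(ReceiveCallback), connection);
        }
        else
        {
            connection.Socket.Close();
        }
    }
    catch (ObjectDisposedException)
    {
        // Listener shut down; nothing to do.
    }
    catch (SocketException)
    {
        connection.Socket.Close();
    }
}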
It turned out the issue was caused by having more than one outstanding call to BeginAccept() when setting up the listener socket. I do not know why this is only a problem on 64-bit servers, but changing the code as shown below fixed the issue.
Original code:
private void SetupServerSocket()
{
    IPEndPoint myEndpoint = new IPEndPoint(IPAddress.Any, _port);

    // Create the socket, bind it, and start listening.
    _serverSocket = new Socket(myEndpoint.Address.AddressFamily,
        SocketType.Stream, ProtocolType.Tcp);
    _serverSocket.Bind(myEndpoint);
    _serverSocket.Listen((int)SocketOptionName.MaxConnections);

    for (int i = 0; i < 10; i++)
    {
        _serverSocket.BeginAccept(new AsyncCallback(AcceptCallback), _serverSocket);
    }
}
Changed to the following:
private void SetupServerSocket()
{
    IPEndPoint myEndpoint = new IPEndPoint(IPAddress.Any, _port);

    // Create the socket, bind it, and start listening.
    _serverSocket = new Socket(myEndpoint.Address.AddressFamily,
        SocketType.Stream, ProtocolType.Tcp);
    _serverSocket.Bind(myEndpoint);
    _serverSocket.Listen((int)SocketOptionName.MaxConnections);

    //for (int i = 0; i < 10; i++) {
        _serverSocket.BeginAccept(new AsyncCallback(AcceptCallback), _serverSocket);
    //}
}
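A single pending accept suffices here because AcceptCallback posts a new BeginAccept() each time a connection completes, so the listener always has exactly one accept outstanding and incoming connections are still picked up back to back. On .NET 4.5 and later, the same accept loop can also be written against the Task API; here is a minimal sketch under that assumption (the class name, port handling, and buffer size are illustrative, not from the original post):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncAcceptLoop
{
    private readonly Socket _serverSocket;

    public AsyncAcceptLoop(int port)
    {
        _serverSocket = new Socket(AddressFamily.InterNetwork,
            SocketType.Stream, ProtocolType.Tcp);
        _serverSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        _serverSocket.Listen(100);
    }

    public async Task RunAsync()
    {
        while (true)
        {
            // Exactly one accept is pending at a time; the next one is
            // issued only after the previous connection has been handed off.
            Socket client = await Task.Factory.FromAsync(
                _serverSocket.BeginAccept, _serverSocket.EndAccept, null);
            var ignored = HandleClientAsync(client); // per-connection handler runs independently
        }
    }

    private async Task HandleClientAsync(Socket client)
    {
        byte[] buffer = new byte[4096];
        using (client)
        {
            while (true)
            {
                int bytesRead = await Task.Factory.FromAsync<int>(
                    (callback, state) => client.BeginReceive(buffer, 0, buffer.Length,
                        SocketFlags.None, callback, state),
                    client.EndReceive, null);
                if (bytesRead == 0) break; // peer closed the connection
                // Process buffer[0..bytesRead) here.
            }
        }
    }
}

The FromAsync wrapper just adapts the same Begin/End pairs shown above, so the behavior on the wire is identical; only the control flow changes.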