About Azarchius

  • Rank
    Advanced Member

  1. I mean, yeah. 3.3.5a has been stable forever. Like I said in the OP, it's definitely a 7.x issue, as I didn't have it when I was on 4.x either.
  2. It seems the aforementioned packets are not sent if the server is interrupted or crashes while running under gdb and the thread itself is not terminated. If I just run it again without closing gdb (i.e. "run" again and answer yes when prompted to restart the program), the issue does not occur.
  3. Hrm, could this be related? I'm getting loads of messages like these on server shutdown:

     Prevented sending of [SMSG_UPDATE_OBJECT 0x280D (10253)] to non existent socket 1 to [Player: Dbdr GUID Full: 0x08000400000000000000000000000001 Type: Player Entry: 0 Low: 1, Account: 1]
     Prevented sending of [SMSG_LOGOUT_COMPLETE 0x26AF (9903)] to non existent socket 0 to [Player: Account: 1]
  4. I'm afraid I'm not running RA or SOAP. The SOAP part is in use, you say? That's pretty interesting, though I can't think of what else would work like it and cause the same issue... By the way, thanks a lot for your help so far, I really appreciate it. If you can think of anything else, I'd love to hear it. Edit: For the record, the bnetserver behaves in exactly the same way. If I end it, I can't bring it back up for a good minute.
  5. I don't have SELinux, and my netstat is clean after shutting down the worldserver and while turning it on again; the only related thing running there is the bnetserver. You can see the server booting on the right, where it promptly hit the error. So yeah, the problem is seemingly that it's not giving up the port? Yet according to netstat it certainly gave it up. Also, I don't have two worldservers up; there's no scenario where I could, since the worldserver always runs off the same screen.
  6. Never mind, it appears the issue is immediately back. Strange that it worked at first: I straight up interrupted the process and booted it back immediately, then did it again just now, and it's back. Alas. Edit: Found a symptom. It seems to happen only if a person had logged into the server. Could the core be stuck trying to send them disconnect packets? The client *does* gracefully disconnect even if I interrupt the process. I tried logging into the server, logging off, and only then restarting, and actually received an error this time:

     World initialized in 0 minutes 27 seconds
     StartNetwork failed to bind instance socket acceptor
     Failed to initialize network
     /home/stage/core/src/server/shared/Networking/SocketMgr.h:35 in ~SocketMgr ASSERTION FAILED: !_threads && !_acceptor && !_threadCount
     StopNetwork must be called prior to SocketMgr destruction
     Segmentation fault
     [email protected]:~/server/bin$
  7. Ubuntu 16.04. Now that I think about it, I not only updated the core but the server as well at the time. I was also on 14.04 before the upgrade, for what it's worth. The core was very old: the latest 434 commit (at least at the time, though IIRC TC isn't updating Cata anymore). Also, I held on to this reply for a while, since what you said gave me an idea. Indeed, I upgraded all software on the system, including the distro, and then did a recompile from total scratch. Works like a charm, thanks! I did appear to lose the ability to tunnel into MySQL as root, though, but that's something to investigate later.
  8. Hrm, but why the firewall? That seems strange; the problem only lasts for half a minute or so after the initial shutdown. There's also nothing particularly custom about my iptables and such. I am serving two IPs from the same machine, but that seems less relevant, and the core did work before.
  9. Hi, every time my server restarts, it needs to wait some 30 seconds before it can boot again; until then I'm presented with the error below. For some reason TrinityCore isn't gracefully closing the socket. I thought it had something to do with the way I shut down the core, but even with graceful, non-forced shutdowns this still happens, and I haven't seen any thread about it (other than a years-old one that turned out not to be a TC issue). Is this just something wrong with my server? It wasn't a problem before I upgraded to 7.x, and some preliminary investigation didn't turn up anything wrong with my server's network configuration. Using the latest 7.x build. Cheers.

     World initialized in 0 minutes 29 seconds
     StartNetwork failed to bind socket acceptor
     Failed to initialize network
     [email protected]:~/server/bin$
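[Editor's note] A roughly 30-second lockout after shutdown is characteristic of TCP's TIME_WAIT state: connections closed by the server linger on the port, and a fresh bind() without SO_REUSEADDR fails with EADDRINUSE until they expire. I can't say from the thread which path the 7.x acceptor takes internally, so the sketch below is only a generic, self-contained demonstration of the socket-level behaviour on Linux (the helper names and the use of an ephemeral port are my own, not TrinityCore code):

```python
import socket
import time

def bind_listener(port, reuse):
    """Try to bind a listener on 127.0.0.1:port.
    Returns the socket, or None if the port is still in use."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if reuse:
        # SO_REUSEADDR lets a new acceptor bind while old
        # connections on the port are still in TIME_WAIT.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    try:
        s.bind(("127.0.0.1", port))
        s.listen(1)
        return s
    except OSError:  # EADDRINUSE
        s.close()
        return None

def restart_demo():
    # Pick a free port (a stand-in for the worldserver's listen port).
    probe = socket.socket()
    probe.bind(("127.0.0.1", 0))
    port = probe.getsockname()[1]
    probe.close()

    # A toy "worldserver" accepts one client connection...
    server = bind_listener(port, reuse=True)
    client = socket.create_connection(("127.0.0.1", port))
    conn, _ = server.accept()

    # ...then shuts down, closing the accepted connection first,
    # which parks the server's side of it in TIME_WAIT.
    conn.close()
    client.close()
    server.close()
    time.sleep(0.2)  # let the FIN/ACK handshake complete

    # Immediate "restart": the plain bind fails until TIME_WAIT
    # expires; the SO_REUSEADDR bind succeeds right away.
    plain = bind_listener(port, reuse=False)
    reused = bind_listener(port, reuse=True)
    for s in (plain, reused):
        if s:
            s.close()
    return (plain is not None, reused is not None)
```

On Linux, `restart_demo()` normally shows the plain rebind failing while the SO_REUSEADDR rebind succeeds, which matches the "wait 30 seconds, then it boots fine" symptom described above.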
  10. Annoyingly, I realized my mistake only moments later. I was doing tests to see if I could create a global_strings hotfix table, and thanks to those tests, hotfixes 1, 2, 9999999, etc. were all cached in my DBCache.bin. Clearing the cache was all that was required. And for the curious: no, it seems the client can receive an invalidation request for a GlobalStrings.db2 row, but it doesn't seem to want to receive a GlobalStrings.adb file, or rather cache it in DBCache.bin, for version 725. Edit: I actually went and checked DBCache.bin, and the hotfix for GlobalStrings was right there. So I don't know why it didn't work, but I suppose it just means there's a reason there isn't a hotfix table for every db2.
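[Editor's note] Since "clear the client's hotfix cache" is the fix here, a small helper for doing it is sketched below. The Cache/ADB/&lt;locale&gt;/DBCache.bin layout is an assumption based on common Legion-era client installs, so adjust the glob for your setup, and run it only while the client is closed:

```python
import pathlib

def clear_db_cache(wow_root):
    """Delete every DBCache.bin under the client's Cache directory.

    Assumes the (Legion-era) layout Cache/ADB/<locale>/DBCache.bin;
    returns the list of paths that were removed.
    """
    root = pathlib.Path(wow_root)
    removed = []
    for cache in root.glob("Cache/ADB/*/DBCache.bin"):
        cache.unlink()  # the client rebuilds this file on next login
        removed.append(cache)
    return removed
```

For example, `clear_db_cache("/path/to/World of Warcraft")` removes the cached hotfix rows for every locale it finds, forcing the client to re-request hotfixes from the server.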
  11. Hey there, I've recently updated my core to the latest build and discovered that the way hotfixes are handled has been revamped: there's now a HotfixCacheID that is sent to the client. If I've understood correctly, all this really means is that the ID of the hotfix_data row needs to equal the server version's hotfix_cache_id, so for this instance I set it to 1. I edited hotfix.hotfix_data, hotfix.char_titles, and world.version, and even edited worldserver.conf, and yet I can't seem to edit title id 1 (Private). All this manages to do is remove the title from my Titles list. What gives? Does the client have an internal hotfix cache id? I saw the client request a Creature hotfix that was unrecognized by the core, so it stands to reason that the client believes it is "behind" in hotfixes and knows what to "expect" from them, but I'm not sure how the client knows this. I tried setting the hotfix cache id to one million to get ahead of a possible internal cache id, but that didn't work either. How are hotfixes supposed to be dished out in the latest version?
  12. Thanks man. Though for some reason the guide says that the database and addresses are the only things that need changing. I'm now also getting a "failed to download cert bundle" error, but this is doubtless related to the private key and certificate options of the config; I'll search for setup explanations. Edit: A quick Google search revealed the problem. For anyone reading: tc_bundle.txt is a critical component, generated when patching the client.
  13. Can't connect to my server. Would appreciate assistance.

     Client log:

     2/16 06:59:43.644 [IBN_Login] Starting up | hasFrontInterface=false | hasBackInterface=false
     2/16 06:59:56.653 [GlueLogin] Starting login | launcherPortal=nullopt | loginPortal=[external ip]:1119
     2/16 06:59:56.653 [GlueLogin] Resetting
     2/16 06:59:56.653 [IBN_Login] Initializing
     2/16 06:59:56.654 [IBN_Login] Attempting logon | host=[external ip] | port=1119
     2/16 06:59:56.654 [GlueLogin] Waiting for server response.
     2/16 06:59:57.053 [GlueLogin] Waiting for server response.
     2/16 06:59:57.138 [GlueLogin] Waiting for server response.
     2/16 06:59:58.155 [GlueLogin] Fatal error while logging in | result=( | code=ERROR_HTTP_COULDNT_CONNECT (14001) | localizedMessage= | debugMessage=JSON error: ERROR_HTTP_COULDNT_CONNECT (14001) token: 1)
     2/16 06:59:58.198 [IBN_Login] Front disconnecting | connectionId=1
     2/16 06:59:58.198 [GlueLogin] Disconnecting from authentication server.
     2/16 06:59:58.284 [IBN_Login] Front disconnected | connectionId=1 | result=( | code=ERROR_OK (0) | localizedMessage= | debugMessage=)
     2/16 06:59:58.284 [GlueLogin] Disconnected from authentication server.
     2/16 06:59:58.284 [IBN_Login] Destroying | isInitialized=true
     2/16 06:59:59.662 [IBN_Login] Destroying | isInitialized=false
     2/16 07:00:00.560 [IBN_Login] Shutting down

     bnetserver log:

     250 [myip:59711] Client called server method ConnectionService.Connect(bgs.protocol.connection.v1.ConnectRequest{ use_bindless_rpc: true }) returned bgs.protocol.connection.v1.ConnectResponse{ server_id { label: 4187 epoch: 1487221187 } server_time: 1487221187507 use_bindless_rpc: true } status 0.
     251 [myip:59711] Server called client method ChallengeListener.OnExternalChallenge(bgs.protocol.challenge.v1.ChallengeExternalRequest{ payload_type: "web_auth_url" payload: " bnetserver/login/" })
     252 [myip:59711] Client called server method AuthenticationService.Logon(bgs.protocol.authentication.v1.LogonRequest{ program: "WoW" platform: "Wn64" locale: "enUS" version: "Battle.net Game Service SDK v1.6.4 \"5cf152fa90\"/92 (Jan 17 2017 14:35:22)" application_version: 23420 allow_logon_queue_notifications: true web_client_verification: true device_id: "{ \"RGKY\" : 1158379523, \"CPGE\" : 2290226472, \"ULNG\" : 2169678830, \"SLNG\" : 2169678830, \"CNME\" : 1060965492, \"UNME\" : 3707741108, \"UTCO\" : 1933075630, \"CARC\" : 1007465396, \"CREV\" : 2064042640, \"CLVL\" : 856466825, \"PMEM\" : 1033840135, \"PSZE\" : 994499840, \"OVER\" : 787127494, \"CVRA\" : 191043296, \"CFTC\" : 799610694, \"CFTD\" : 3661751046, \"CEFC\" : 890022063, \"CEFD\" : 2004546342, \"CBRD\" : 115274386, \"CVEN\" : 598815718, \"ANME\" : 1696983174, \"ADSC\" : 3047148086, \"MAC\" : 2921241649 }" }) returned bgs.protocol.NoData{ } status 0.
     253 [myip:59711] Server called client method ConnectionService.ForceDisconnect(bgs.protocol.connection.v1.DisconnectNotification{ error_code: 0 })
     254 [myip:59711] Client called server method ConnectionService.RequestDisconnect(bgs.protocol.connection.v1.DisconnectRequest{ error_code: 0 }) status 0.

     realmlist entry:

     INSERT INTO `realmlist` (`id`, `name`, `address`, `localAddress`, `localSubnetMask`, `port`, `icon`, `flag`, `timezone`, `allowedSecurityLevel`, `population`, `gamebuild`, `Region`, `Battlegroup`)
     VALUES (1, 'Trinity', 'externalip', '', '', 8085, 0, 0, 1, 0, 0, 23420, 2, 1);

     Client build is 23420. Using a patched client.
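[Editor's note] ERROR_HTTP_COULDNT_CONNECT at the logon step means the client reached the bnetserver on 1119 but could not reach the HTTP login service it was redirected to. A quick TCP reachability probe for the ports involved can be sketched as below; the port numbers are assumptions taken from common TrinityCore defaults (1119 for bnet, 8081 for LoginREST.Port, 8085 for the world port), so check them against your bnetserver.conf and worldserver.conf:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_server(host):
    # Ports assumed from common TrinityCore defaults; adjust to your configs.
    ports = {"bnet": 1119, "login_rest": 8081, "world": 8085}
    return {name: can_connect(host, port) for name, port in ports.items()}

if __name__ == "__main__":
    for name, ok in check_server("127.0.0.1").items():
        print(f"{name}: {'open' if ok else 'CLOSED'}")
```

Run it from the machine the client is on, against the external IP from the realmlist; a closed login_rest port while bnet is open would match the log above. Note this only checks TCP reachability, not TLS or certificate problems.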
  14. You quite explicitly told him to go to 6.x, but never mind. From what I know, the 434 branch is nowhere near the 335 level, but it's in no way "nigh unplayable."
  15. Yeah. Ariel, I urge you to stick with 434 for at least another few months.