PMXBOT Log file Viewer


#mongodb logs for Wednesday the 14th of May, 2014

[00:38:40] <Tug> when adding a relation to 2 services, events "joined" and "changed" are not synchronized between the 2 services. For instance my hooks execute in that order :
[00:38:40] <Tug> service1-relation-joined > service1-relation-changed > service2-relation-joined > service2-relation-changed, but what I was expecting would be to have both "joined" hooks completed before running "changed" hooks.
[00:39:04] <Tug> (sry wrong window)
[02:30:20] <maybeno> Hey all, anyone recently try and install an older version of mongodb (2.4.10) via the redhat RPMS? I've been trying to install the older version via the documentation but it only sees the latest.
[02:49:45] <in_deep_thought> is there any way to update a document based on whats in the callback? like this: http://bpaste.net/show/wzs0wDHTvMmjmMe3EEJi/
[03:00:53] <Reflow> i have a database containing 5mill~ rows
[03:01:01] <Reflow> but its taking too long to execute a query
[03:01:09] <Reflow> while i had the exact same data in a sql server database
[03:01:12] <Reflow> and it took no time for queries
[03:03:32] <Reflow> hello
[03:37:21] <maybeno> depends on what you have indexed
[03:37:34] <maybeno> and what exactly you are querying
[03:41:06] <tonph> hi guys, i'm new to this, just need some help. I am trying to cpickle and then compress a google doc feed object and store it in mongodb. But it's giving me an error. how can i store the compressed string
[03:43:56] <tonph> any one has any idea on how to store an lz4 compressed string in mongodb
[03:54:28] <maybeno> make sure you are doing cpickle correctly before compression maybe.. http://sergeykarayev.com/storing_numpy_array_in_mongo/ ?
[04:30:50] <joannac> I'm banning kfb4 for connection unstableness
[04:41:33] <tonph> Hi, anyone has tried to store lz4 compressed dumps of string to mongodb
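tonph's question — storing an lz4-compressed pickle dump — usually comes down to storing the blob as BSON binary rather than as a string field. A minimal Node-style sketch under that assumption (the Node driver stores a `Buffer` as BSON binary; in the mongo shell the equivalent type is `BinData`); the field names and byte values here are purely illustrative:

```javascript
// Hypothetical example: an lz4-compressed payload is raw bytes, so keep it
// in a Buffer (stored as BSON binary), not in a string field.
// 0x04 0x22 0x4d 0x18 is the lz4 frame magic number, used here only as
// stand-in data.
const payload = Buffer.from([0x04, 0x22, 0x4d, 0x18]);
const doc = { name: "feed-cache", blob: payload };
// With the Node driver this doc could be passed to a collection insert;
// reading it back later is Buffer -> lz4 decompress -> unpickle.
```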
[05:03:11] <s2013> can you dl mongo db using sudo apt-get on ubuntu ?
[05:03:24] <s2013> i tried following the official instructions but i get an error "cannot execute binary file"
[05:04:46] <m3lol_> I'm having trouble updating multiple documents
[05:06:37] <m3lol_> anyone awake? :)
[05:06:43] <federated_life1> no
[05:07:05] <m3lol_> can I spam the room with some data from my collections?
[05:07:13] <federated_life1> paste bin…
[05:07:39] <m3lol_> standby
[05:07:46] <federated_life1> s2013: paste bin your error, make sure you have a data dir set
[05:08:20] <federated_life1> m3lol_: did you set multi : true ?
[05:08:28] <s2013> im reinstalling it. let me see how that works
[05:08:29] <m3lol_> yes
[05:08:57] <federated_life1> s2013: you might be on a 32 bit system and install 64 bit binaries…
[05:09:11] <s2013> how do i check what bit i am in?
[05:09:24] <s2013> im on windows 64bit but using ubuntu vmware, i thought i was on 64bit
[05:09:28] <m3lol_> http://pastebin.com/SqW68NWA
[05:09:52] <m3lol_> why come my query's only updating 1 document? :(
[05:13:53] <s2013> hmm i think mongo is working fine now but my path directory is messed up
[05:14:04] <s2013> how do i make sure whe i type mongo its the correct one
[05:14:07] <federated_life1> m3lol_: only 1 or none ?
[05:14:14] <s2013> i had it set to the old directory that i deleted
[05:15:07] <federated_life1> s2013: thats why theres a config file, you can db.serverStatus() for details
[05:16:07] <federated_life1> m3lol_: you could try using $in instead of a regex //
[05:17:51] <s2013> huh?
[05:17:53] <s2013> i cant get into mongo
[05:18:07] <s2013> when i type mongo in terminal it looks for old file path thats no longer there
[05:18:50] <federated_life1> s2013: pastebin ps -ef | grep mongo
[05:21:01] <s2013> mongodb 21089 1 0 01:11 ? 00:00:02 /usr/bin/mongod --config /etc/mongod.conf
[05:21:28] <federated_life1> s2013: cat /etc/mongodb.conf
[05:21:37] <s2013> k then
[05:22:09] <federated_life1> probably you dont have a bind ip set up…so, you have to connect to the ip… mongo ip:27017
[05:22:09] <s2013> its my PATH variable thats fucked up
[05:22:14] <s2013> no man
[05:22:20] <s2013> i have my path set to the wrong directory
[05:22:36] <s2013> mongo works fine its just if i type "mongo" it looks for it in the wrong directory
[05:23:07] <federated_life1> shell> which mongo
[05:23:18] <s2013> yes, its pointed to the wrong directory
[05:23:20] <s2013> like i said
[05:23:25] <federated_life1> why did you do that?
[05:23:27] <federated_life1> ;)
[05:23:36] <s2013> because i followed mongodbs official instruction for linux
[05:23:55] <s2013> but that didnt work so i went and installed it using apt-get
[05:28:02] <maybeno> did you do the one for ubuntu?
[05:29:57] <s2013> maybeno, yes. it works, its just that my path is fucked up
[05:32:23] <m3lol_> only 1 updates
[05:32:29] <m3lol_> but i want all of them to update
[05:33:14] <maybeno> look at your mongod.conf file in /etc/mongod.conf
[05:33:23] <maybeno> look for the dbpath variable
[05:33:31] <maybeno> you may need to change that
[05:33:44] <m3lol_> db.mod_legend.update( {$in: {mod_key: 1} }, {$set: {mod_name: '3234232342342342342344'}}, {multi: true} );
[05:33:48] <m3lol_> is that correct?
[05:34:11] <m3lol_> god damn possum crawled into my room
[05:34:15] <m3lol_> sorry, got distracted :)
[05:38:13] <federated_life1> m3lol_: no
[05:38:30] <federated_life1> $in : [ "a1","b1",etc]
[05:38:40] <federated_life1> it takes an array, use the square brackets for an array
[05:39:08] <federated_life1> http://docs.mongodb.org/manual/reference/operator/query/in/
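federated_life1's point, sketched in plain JS (the collection/field names like `mod_key` come from m3lol_'s pastebin and are placeholders): `$in` takes an array of candidate values and sits under the field name, not at the top level of the query document.

```javascript
// m3lol_'s form put $in at the top level with an object — invalid:
const wrong = { $in: { mod_key: 1 } };
// Correct shape: field name first, $in with an ARRAY of values:
const right = { mod_key: { $in: ["a1", "b1"] } };

// Plain-JS stand-in for the matching semantics, just to illustrate:
const docs = [{ mod_key: "a1" }, { mod_key: "c9" }];
const hits = docs.filter((d) => ["a1", "b1"].includes(d.mod_key));
```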
[05:39:25] <federated_life1> and sersly bro, use gogle
[05:53:49] <m3lol_> but then i have to specify each key
[05:54:00] <m3lol_> i just want to update any document with the key that has "1" in it
[05:54:22] <m3lol_> which was the purpose of the regex
[05:54:30] <m3lol_> which the document said it would work, but doesn't
[06:10:25] <joannac> m3lol_: I just tried it, it worked for me...
[06:10:49] <m3lol_> can you give me your sample collection and commands?
[06:11:01] <joannac> I basically copied and pasted from your pastebin
[06:11:31] <joannac> http://pastebin.com/122gtskH
[06:12:47] <m3lol_> can you paste a .stats() on your foo collection?
[06:13:22] <joannac> http://pastebin.com/h6WCsukZ
[06:13:42] <joannac> not sure what you're looking for there...
[06:22:06] <kozer> Hi to everyone! I have this query: db. points.find({ geo: { '$geoWithin': { '$geometry': { type: 'Polygon', coordinates: [ [ [ 8.3056640625, 46.830133640447386 ], [ 35.57373046875, 46.830133640447386 ], [ 35.57373046875, 35.28150065789119 ], [ 8.3056640625, 35.28150065789119 ], [ 8.3056640625, 46.830133640447386 ] ] ] } } } },{price:true}).sort({ price: -1 }).limit(500). What is the right index for this? I tried some indexes : {price:-1},{geo:'
[06:22:07] <kozer> 2dsphere'},{geo:'2dsphere',price:-1},{price:-1,geo:'2dsphere'} and {price:-1,"geo.coordinates":'2dsphere'}, but the query is still slow. Only when i use hint with "price_-1" does the query time drop to 18ms. When mongodb chooses for me the query time is 1766ms!
[06:23:51] <kozer> Derick: Thanks a lot about yesterday, your help was tremendous!
[06:24:15] <m3lol_> joannac: i created a new collection, dumped the data into that collection, and ran that same update....still just one document updated
[06:30:30] <m3lol_> http://pastebin.com/Ww7EjqBG
[06:39:31] <m3lol_> fuarrrkkkk
[06:39:38] <m3lol_> db.version(); 2.0.4 :(
[06:39:44] <m3lol_> so many hours :(
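The `db.version()` discovery above likely explains the whole thread: the options-document form `update(query, update, { multi: true })` only arrived in MongoDB 2.2, so on 2.0.4 the `{multi: true}` argument is ignored and a single document is updated. On 2.0.x, `multi` is the fourth positional boolean. A sketch (the shell calls are shown as comments; the small helper just encodes the assumed 2.2 cutoff):

```javascript
// mongo-shell syntax (not plain Node):
//   // 2.0.x — positional signature: update(query, updateDoc, upsert, multi)
//   db.mod_legend.update({ mod_key: /1/ }, { $set: { mod_name: "x" } }, false, true);
//   // 2.2+ — the options document is understood:
//   db.mod_legend.update({ mod_key: /1/ }, { $set: { mod_name: "x" } }, { multi: true });

// Version gate for the options-document form (2.2 is the assumed cutoff):
function supportsUpdateOptionsDoc(version) {
  const [major, minor] = version.split(".").map(Number);
  return major > 2 || (major === 2 && minor >= 2);
}
```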
[07:01:56] <m3lol_> great, just upgraded to 2.6, now i can't connect
[07:16:56] <Gr1> Greetings everyone.
[07:18:50] <Gr1> I have a working replica set of mongodb, with 5 boxes
[07:19:06] <Gr1> And out of all these 5 boxes with similar configuration and similar set of data,
[07:19:16] <Gr1> only one box is showing faults
[07:19:22] <Gr1> and slow queries
[07:19:35] <Gr1> Any idea what is causing the slow queries and faults?
[07:19:45] <Gr1> Index size are the same across all boxes
[07:19:50] <Gr1> and are replicated
[07:59:32] <sikor_sxe> hello, what would a possible reason to store documents in several collections?
[07:59:35] <sikor_sxe> performance?
[08:12:10] <kali> clarity and ease of use
[08:12:34] <kali> mostly
[08:12:39] <kali> performance, marginally
[08:22:49] <Zelest> how do I deal with "PHP Fatal error: Uncaught exception 'MongoConnectionException' with message 'No candidate servers found'" ?
[08:22:57] <Zelest> the only way I seem to be able to solve that is to restart php-fpm
[08:23:05] <Zelest> the mongo nodes are all working and a master is available and all
[08:23:18] <kees_> but you recently restarted a mongo?
[08:23:24] <kees_> or switched a master?
[08:23:25] <Zelest> nope
[08:23:40] <Zelest> it seems to stay like that until I restart php-fpm
[08:24:24] <kees_> i've seen that before, it stays till you've reused every connection in your pool (which means that restarting php-fpm solves it as well)
[08:24:43] <Zelest> ah
[08:24:47] <Zelest> any solution for this?
[08:24:59] <kees_> yes.. restart php-fpm :+
[08:25:00] <Zelest> ah, seems like I'm running the 1.3.4 driver.. might be worth upgrading it before complaining..
[08:25:24] <kees_> nah seriously; i only see it on my low-traffic testsites, so i usually restart apache to fix it
[08:25:51] <Zelest> i see this on our advertisement server which has millions of requests a day :/
[08:26:14] <kees_> hm
[08:26:43] <Zelest> i remember there was some flag for this in php-ini a year or so ago..
[08:26:53] <Zelest> like, how long before mongo "polls" to see the master
[08:26:53] <kees_> but yeah; upgrade the mongo driver first
[08:26:57] <Zelest> yeah
[08:28:34] <kees_> good call btw, i seem to be behind on upgrades as well :P
[08:28:43] <Zelest> hehe
[08:29:07] <Zelest> i love how i restore b0rked slaves :D
[08:29:27] <Zelest> service mongodb stop && cd /path/to/db-dir && rm -rf * && service mongodb start
[08:29:28] <Zelest> :D
[08:29:33] <kees_> hehe, true
[08:29:50] <Zelest> seems to get back to sync a lot faster than trying to "keep up" with the master
[08:29:58] <Zelest> or, not always.. but often
[08:30:58] <kees_> i'm running in a bit of an unusual setup, so that's how i start the master
[08:32:04] <kees_> master is running in /run/shm so if i reboot the server the data is gone anyway
[09:07:33] <Derick> Zelest: hmm?
[09:09:06] <Zelest> regarding "No candidate servers available"
[09:09:10] <Zelest> when no master is available
[09:09:19] <Zelest> only way I seem to solve that is to restart php-fpm
[09:09:27] <Zelest> what's the "correct" way of solving it?
[09:39:50] <lucsx> Hi
[09:40:13] <lucsx> I can't understand why mongodb isn't starting the http stats server, I made sure to have nohttpinterface=false (didn't work with it commented either)
[09:45:27] <louisrr> did you try it with it set to true?
[09:46:54] <lucsx> louisrr, set to true means that it shouldn't start
[09:47:08] <louisrr> yea
[09:47:39] <louisrr> anybody know why my save statement in PHP won't work? http://pastebin.com/3tsV7rqT
[10:18:37] <arussel> is there a way using shell to use a variable as property name ? so var foo = "bar", db.xx.insert({foo: "y"}) would result in {"bar":"y"} ? at the moment I'm getting {"foo":"y"}
[10:20:54] <arussel> found it var x = {}; x[foo] = "bar"; db.xx.insert(x)
[10:33:41] <louisrr> yea I was going to say it should be like a regular javascript statement where you use var x as an array name
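arussel's own fix above, as a self-contained sketch; modern JavaScript also allows the computed key inline:

```javascript
const foo = "bar";
const doc = {};
doc[foo] = "y";               // key comes from the variable's VALUE
const doc2 = { [foo]: "y" };  // ES2015 computed-key shorthand, same result
// In the shell, db.xx.insert(doc) would then store {"bar": "y"}.
```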
[13:15:38] <martinrd> What user privilege/role is needed to list all collections in a database?
[13:39:24] <vaxxon> I have a large number of zip files, and intend to have them unzipped, and the contents of each zip file recorded by its name and md5 hash. Given an md5 hash, I want to be able to search an entire mongo document and get the name of the zip file it was contained in. Would I be able to declare md5 to be an index? Would this structure work okay, or is there a more efficient way to do this?
[13:39:30] <vaxxon> http://pastie.org/private/zzezssmkfwncyx7zi9mefq
[13:39:56] <thewiz> how can I log a subdocument of a find to a variable?
[13:40:23] <thewiz> i run a shell.db.User.findOne({ username: options.username }, function (err, user) { where options.username is the entry
[13:40:47] <thewiz> that searches for that user entry, but every user has a subdocument called 'creator'
[13:41:20] <thewiz> the real ID of the user is stored in there, how can I store contents of 'creator' in a variable?
[13:44:53] <saml> hey, how can I find if field foobar is an empty array?
[13:45:10] <saml> db.collection.find({foobar: []}) ?
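saml's `{foobar: []}` query does match documents whose `foobar` is an empty array; `{foobar: {$size: 0}}` is the other common spelling. Neither matches documents where the field is missing. A plain-JS stand-in for the semantics (the shell queries themselves are in the comment):

```javascript
// Shell equivalents:
//   db.collection.find({ foobar: [] })
//   db.collection.find({ foobar: { $size: 0 } })
const docs = [
  { _id: 1, foobar: [] },      // matches
  { _id: 2, foobar: [1, 2] },  // does not match
  { _id: 3 },                  // field missing — does not match
];
const matches = docs
  .filter((d) => Array.isArray(d.foobar) && d.foobar.length === 0)
  .map((d) => d._id);
```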
[13:48:23] <betty> Can anyone tell me if it is possible to use findAndModify to change the type of a field? I need to change a field from int32 to String, if it exists and has a value.
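On betty's question: `findAndModify`/`$set` will happily write a value of a different type, but there is no server-side cast in the 2.x shell, so the usual pattern is read-convert-write over the matching documents. A hedged sketch (the field name `myField` is a placeholder; BSON type 16 is int32):

```javascript
// mongo-shell sketch (comments only — needs a live server):
//   db.coll.find({ myField: { $type: 16 } }).forEach(function (doc) {
//     db.coll.update({ _id: doc._id }, { $set: { myField: String(doc.myField) } });
//   });

// The per-document conversion, in plain JS:
const toStringField = (doc, field) =>
  typeof doc[field] === "number"
    ? { ...doc, [field]: String(doc[field]) }
    : doc;  // leave non-numeric / missing values alone
```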
[14:08:00] <q851> I'm having an issue with replication on one of my clusters. I have 3 replica sets (no sharding), three config servers (on the same servers as replica sets), and a number of mongos instances.
[14:08:41] <q851> 01 - current primary, 02 - secondary (behind 3800 secs), 03 - secondary (up to date)
[14:09:17] <q851> I have hidden 02 but it is still falling behind on replication.
[14:09:31] <q851> This cluster is running 2.2.5.
[14:10:50] <q851> This isn't the first time it has fallen behind. It seems to do this a bit. Can someone help me diagnose the problem?
[14:18:34] <thewiz> if anyone has the time I would really appreciate it http://stackoverflow.com/questions/23657254/mongodb-mongoose-storing-subdocument-of-a-variable-to-a-variable
[14:37:57] <TylerE> I have a weird case...I have some Python code using Pymongo that runs an upsert..... on my dev box (windows) everything works fine. On the linux server the upsert is totally failing (no row is updated or inserted and PyMongo returns None
[14:40:11] <q851> Any DBAs in this channel?
[15:23:18] <sweb> is there any fast insert for mongo like transactions in MySQL
[15:23:29] <sweb> i need to use upsert with $inc
[15:23:43] <sweb> but it's not as fast as batchInsert
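For sweb's case, the closest thing on 2.6 (the current release at the time) is the Bulk API, which batches many upsert-with-`$inc` operations into one round trip. Shell sketch in the comments (collection and field names are placeholders); the helper below just illustrates the upsert-then-`$inc` semantics:

```javascript
// mongo-shell sketch (2.6+):
//   var bulk = db.counters.initializeUnorderedBulkOp();
//   bulk.find({ _id: "pageviews" }).upsert().updateOne({ $inc: { n: 1 } });
//   bulk.find({ _id: "clicks" }).upsert().updateOne({ $inc: { n: 5 } });
//   bulk.execute();

// Plain-JS stand-in for what each upsert + $inc does:
function incUpsert(store, key, delta) {
  store[key] = (store[key] || 0) + delta;  // create at 0 if absent, then increment
  return store;
}
```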
[16:48:53] <kagaro> anyone else get undefined method `new_record?' on an instance of Mongoid::Criteria when saving a document with both embeds and has_many in it?
[16:50:37] <zylthinking> hi, any one knows whether the mongodb c++ driver can be used for both 2.4 & 2.6
[16:52:23] <zylthinking> hi, any one knows whether the mongodb c++ driver can be used for both 2.4 & 2.6? what if a 2.6 feature sent to 2.4 server?
[17:13:29] <diogogmt> is it possible to apply a projection after a group in the aggregation pipeline?
[17:13:39] <diogogmt> my query is returning empty if i apply the projection on it
[17:19:39] <mlg9000> Hi.. I'm having some issues with a geo-redundant replica set, it seems when I lose one site (still have a quorum) I can use mongodb...
[17:19:48] <mlg9000> can't*
[17:20:31] <mlg9000> I have to manually eject the nodes from the replica set
[17:20:50] <mlg9000> anyone seen this?
[18:26:51] <subz3r0> hi
[18:28:02] <subz3r0> atm im searching for a solution to fill a database remotely with text, pictures and statistics (graphs), which should work in the end like a presentation and be shown on a screen
[18:28:13] <subz3r0> would that be possible with mongodb?
[18:29:27] <subz3r0> hardware would be a raspberry pi
[18:30:18] <subz3r0> someting like that: server sends data -> raspberry pi --> tv screen
[18:31:13] <subz3r0> the data should be repeated on the screen until the db gets new text, pictures and/or statistics
[18:43:57] <qutwala_> Hey guys, anyone here?
[18:45:07] <qutwala_> Does mongodb hadoop driver speed MP process? ( http://docs.mongodb.org/ecosystem/tutorial/getting-started-with-hadoop/ )
[18:46:18] <qutwala_> I'm stuck in a situation on MP with 6kk data which takes ~4 minutes. it takes too much time, i need to speed it up somehow.
[18:46:44] <qutwala_> And there's no sharding available..
[18:47:43] <qutwala_> I have already split all these data per collections with 1024 * 1024 * 32 size
[18:48:02] <qutwala_> Please, any suggestions?
[18:51:27] <cheeser> does it speed up what now?
[18:54:00] <qutwala_> cheeser, i need to improve MP performance somehow. Can hadoop improve MP performance? I know it uses multithreaded MP`s.
[19:05:47] <qutwala_> anyone, please?
[19:17:52] <cheeser> MP?
[19:17:57] <qutwala_> map reduce
[19:18:04] <cheeser> oh. MR.
[19:18:13] <qutwala_> oh, sorry
[19:18:34] <cheeser> you can try increasing the number of mappers/reducers.
[19:21:42] <qutwala_> do you mean run these mappers/reducers each as a separate mongo process, or try to run them in separate threads?
[19:22:12] <cheeser> you're using hadoop, yes?
[19:22:25] <cheeser> on top of mongo? or are you using mongo's built-in mapreduce?
[19:22:27] <qutwala_> at this time no, i'm using only mongodb
[19:22:37] <qutwala_> and it
[19:22:49] <cheeser> oh, i think any claims to multithreading there are mostly wishful thinking.
[19:25:27] <qutwala_> so, if i try to use the hadoop driver for mongodb it should improve performance, right? Because MongoDB does not support hyperthreading or does it?
[19:26:11] <cheeser> i don't know anything about hyperthreading support either way
[19:28:33] <qutwala_> okay, do you mean increase the number of MR on hadoop or in MongoDB?
[19:29:16] <cheeser> in hadoop
[19:30:22] <qutwala_> Okay, ill try, thank you.
[19:34:03] <SpeakerToMeat> Hello
[19:34:28] <SpeakerToMeat> Question, is there a good book/article/whitepaper on document based database document schema design?
[19:34:52] <cheeser> here's a start: http://docs.mongodb.org/manual/data-modeling/
[19:36:16] <SpeakerToMeat> thanks
[19:36:24] <SpeakerToMeat> Someth.... Holy fuck
[19:36:27] <SpeakerToMeat> you're here too?
[19:36:45] <SpeakerToMeat> I mean. Thanks cheeser
[19:37:04] <SpeakerToMeat> I'm also looking for several scenarios with "this is how we modeled, and whY"
[19:37:22] <SpeakerToMeat> Ok that guide seems to cover a lot, thanks
[19:37:26] <SpeakerToMeat> I'll read from that
[19:37:49] <SpeakerToMeat> cheeser: Not that you'll remember but I used to be lars_g under your iron rule in ##java
[19:38:00] <cheeser> ha!
[19:38:20] <cheeser> i'm the Visa Card of java: everywhere you want to be. ;)
[19:38:43] <SpeakerToMeat> Meh, I thought Eugene was the only javaer that mingled with the lesser races
[20:22:14] <_automan> Question... does mongo have a built in way to discover other nodes using the same replica set?
[20:33:20] <hephaestus_rg> hello, can anyone suggest a cheap mongo provider
[20:33:39] <hephaestus_rg> right now i'm paying 10$/m for 20gb on a shared plan from mongolab
[20:34:02] <hephaestus_rg> but I have a hard limit, and upgrading does not look cheap
[20:35:59] <SpeakerToMeat> Why not use a vps?
[20:36:10] <SpeakerToMeat> someone like aws or prgmr.com or digital ocean?
[20:37:01] <hephaestus_rg> i don't really want to manage it. all i'd want is a URI i can point my rails app at
[20:48:09] <SpeakerToMeat> Is mongo auth scheme strong enough to do that kind of thing?
[21:10:51] <tmcw> anyone here familiar with the internals of mongo's new s2-based indexes?
[21:17:50] <federated_life> tmcw: wouldnt say they are too new
[21:17:58] <federated_life> but its really google's s2's
[21:18:06] <tmcw> y, 11 months old'ish
[21:18:24] <tmcw> somewhere between bleeding edge and obsolete :)
[21:18:56] <tmcw> i'm wondering about the overlap between this method and mongo's basic indexes - it looks like each s2-based geometry can contribute hundreds of index entries into mongodb
[21:21:42] <tmcw> there also seems to be a bunch of dead code in the implementation, which is kind of puzzling and makes me wonder if the /mongo repository is just part of the equation
[21:36:19] <dberg2> I'm trying to understand some failed updates in a sharded collection. The update fails because no document was found. I'm using the _id in the selector although the collection has a different shard key.
[21:36:50] <dberg2> anything I should look at?
[21:37:02] <dberg2> The default seem to be reads from the primary.
[22:04:20] <devdev> i'm having one hell of a time getting $pullAll to work, and to make matters worse I'm trying to do it thru mongoose (a node js module)
[22:04:51] <devdev> i can't get the example to work: http://docs.mongodb.org/manual/reference/operator/update/pullAll/
[22:05:04] <devdev> anyone have any tips?