PMXBOT Log file Viewer


#mongodb logs for Monday the 12th of November, 2012

[00:27:06] <Bilge> I have an embedded document called emails.address and I have an index on emails.address but when I do db.User.find({ 'emails.address': "an@example.com" }).explain() I find that "indexOnly" is false
[00:27:31] <Bilge> Why isn't my index being used?
[00:28:08] <Bilge> It is a single key index just on emails.address
[00:33:18] <crudson> Bilge: what does db.User.find({ 'emails.address': "an@example.com" }).explain().cursor say?
[00:34:41] <crudson> also, see https://jira.mongodb.org/browse/SERVER-5759
[00:49:00] <Bilge> crudson: BtreeCursor emails_address_1
[00:49:39] <Bilge> I don't know what that means
[00:49:48] <Bilge> Doesn't that mean it is using my index?
[00:49:49] <crudson> it's using the index
[00:50:00] <Bilge> So I've just come across that bug?
[00:50:37] <crudson> indexOnly needs to work better, but .cursor is showing correctly that it's being used
[00:50:58] <Bilge> OK thanks :)
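
The distinction the two are circling: explain().cursor reports whether an index drives the lookup, while indexOnly only becomes true for a covered query, where every returned field lives in the index. A shell sketch using the names from the conversation (note that multikey indexes over arrays of embedded documents generally cannot cover queries, which is what SERVER-5759 is about):

    // cursor confirms the index drives the lookup
    db.User.find({ "emails.address": "an@example.com" }).explain().cursor
    // "BtreeCursor emails_address_1"

    // indexOnly can only become true when the query is covered:
    // project just the indexed field and exclude _id
    db.User.find(
        { "emails.address": "an@example.com" },
        { "emails.address": 1, _id: 0 }
    ).explain().indexOnly
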
[01:02:05] <deface> evening.
[01:11:40] <therealkoopa> I'm struggling with the $match on an aggregation. If I put { $match: { 'people.inactive' : false, _id: id }}, it doesn't return anything if there is a document with all inactive people, but I want it to always return the document, just with the inactive people filtered out.
[03:51:10] <mrpro> when are collection locks coming?
[03:54:09] <ckd> mrpro: 2.4, apparently?
[03:54:18] <mrpro> i couldnt find jira
[03:54:19] <mrpro> u got?
[03:55:01] <ckd> mrpro: https://jira.mongodb.org/browse/SERVER-1240 it's mentioned somewhere near the top that it's scheduled for 2.4, but i'm looking for more concrete info
[03:56:45] <ckd> mrpro: i should walk that back, http://www.mongodb.org/display/DOCS/How+does+concurrency+work mentions it's not necessarily going into 2.4
[03:59:33] <mrpro> sucks
[04:00:19] <mrpro> i see a shitton of locks
[04:00:43] <ckd> is it impractical for you to try that suggestion about 2.2 and 1 collection per db?
[04:00:51] <ckd> it's ugly, but….
[04:01:08] <mrpro> thats lame
[04:01:17] <mrpro> but yea its an obvious fix :)
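
The workaround being discussed: in 2.2 the write lock is per database rather than per server, so giving each hot collection its own database gives it its own lock. A sketch with hypothetical database and collection names:

    // one database per hot collection, so writes to one
    // no longer block writes to the other (2.2 locking)
    var events = db.getSiblingDB("app_events").events;
    var users  = db.getSiblingDB("app_users").users;
    events.insert({ type: "click", at: new Date() });
    users.insert({ name: "example" });
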
[05:24:03] <mrpro> Mongo is funny
[05:24:08] <mrpro> it spits out slow transactions into log
[05:24:22] <mrpro> but writing to log causes more IO thus slowing down even more incoming transactions
[05:24:29] <mrpro> thus making it log even more… WTF
[05:26:27] <ChrisPartridge> logception?
[05:33:07] <mrpro> a wut?
[05:33:21] <mrpro> ohhh
[05:33:21] <mrpro> hehe
[07:57:39] <unknet> Hi
[07:58:34] <unknet> How can i consult a subdocument with mongoid?
[08:06:33] <samurai2> Hi there, If I use compound keys as my shard key and I query using only one of them, will that query be done in parallel? thanks :)
[08:42:09] <[AD]Turbo> hola
[09:25:01] <Lujeni> Hello - a query with the findOne operator works well. However, when i use the *find* operator, mongo doesn't return the cursor ( collection = ~200docs ). Someone have an idea pls?
[09:25:35] <NodeX> err
[09:25:44] <NodeX> findOne() returns a document. find() returns a cursor
[09:28:49] <unknet> does somebody know how the hell elemMatch is used in mongoid?
[09:29:06] <Lujeni> NodeX, ok. so do you have an idea why the cursor is not returned?
[09:37:35] <NodeX> perhaps it didn't find anything Lujeni ?
[09:42:37] <Lujeni> NodeX, my collection stores 200 documents and the shell doesn't return *the hand*
[09:50:36] <NodeX> Lujeni : I don't know what "the hand" is sorry
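
For reference, the distinction NodeX is making, as a shell sketch (collection name hypothetical):

    db.docs.findOne()            // returns a single document directly
    var cur = db.docs.find();    // returns a cursor; documents are fetched lazily
    cur.forEach(printjson);      // iterate the cursor to print every document
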
[09:54:19] <NodeX> no pm's sorry
[09:54:32] <ron> NodeX: you don't have pms?
[09:54:43] <NodeX> this month I do
[09:55:36] <ron> :D
[09:58:48] <ppetermann> "pm".. in my time that was called a query.
[09:59:08] <Derick> I've always called it a PM
[09:59:13] <Derick> ppetermann: octomore, really?! :-)
[09:59:20] <ron> Derick: maybe you're not old enough ;)
[09:59:36] <ron> I used to call it a msg.
[09:59:45] <Derick> ron: I'm probably older :P
[09:59:52] <ron> Derick: than what? ;)
[10:00:21] <Derick> you
[10:01:03] <ron> well, that's a possibility.
[10:02:42] <IAD> =)
[10:06:10] <NodeX> ppetermann : it is a query lol
[10:06:21] <NodeX> ron was joking about "pms" (pre-menstrual stress)
[10:07:17] <unknet> how can i query embedded documents using mongoid?
[10:07:57] <NodeX> unknet : mostly drivers use dot notation
[10:08:08] <unknet> NodeX, I'm trying to use it
[10:08:12] <unknet> but always return nil
[10:08:16] <unknet> :(
[10:08:32] <NodeX> https://groups.google.com/forum/?fromgroups=#!topic/mongoid/yPbKMjkBDoc
[10:09:40] <unknet> reading...
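
The dot-notation pattern NodeX mentions, shown in the shell (Mongoid's criteria API builds the same query underneath; field names hypothetical):

    // reach into a single embedded document with dot notation
    db.users.find({ "address.city": "Paris" })
    // match several conditions against one element of an
    // array of embedded documents with $elemMatch
    db.users.find({ addresses: { $elemMatch: { city: "Paris", primary: true } } })
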
[10:09:55] <bigmeow> hi all
[10:10:06] <bigmeow> why you guys use mongodb?
[10:10:31] <bigmeow> why mongodb use boost lib rather than standard c++ lib?
[10:10:31] <Derick> hmm
[10:10:41] <Derick> boost makes porting a lot easier
[10:10:48] <bigmeow> Derick: lol, powerful man appeared:)
[10:11:15] <Derick> heh - I work on the PHP driver
[10:11:22] <bigmeow> Derick: porting, what is that? does that mean you can use mongodb on more systems?
[10:11:25] <Derick> yes
[10:11:29] <Derick> I guess Windows...
[10:11:43] <Derick> it has a lot of really nice features that make things like socket work a lot easier
[10:11:53] <bigmeow> Derick: it is hell to try to build boost on centos 5.3
[10:12:06] <Derick> doesn't it come in a package?
[10:12:11] <bigmeow> Derick: pure c is nice:)
[10:12:23] <bigmeow> Derick: sqlite is easy to port:)
[10:12:37] <NodeX> use sqllite tehn :)
[10:12:39] <Derick> well, yes
[10:12:55] <Derick> but have you ever used sqlite in production for concurrent updates? :-)
[10:13:05] <bigmeow> Derick: there is package, but i wanna build it from source:)
[10:13:09] <Derick> (I have, it wasn't pretty)
[10:13:16] <Derick> why build it from source when you don't have to?
[10:13:43] <bigmeow> Derick: just kidding, the main reason is that i am not root on that machine, i have to build it from source:( what a hell, it's painful:(
[10:13:43] <NodeX> coz that's what the cool kids do
[10:15:09] <bigmeow> NodeX: dude, i am not kid:( i am oldman:)
[10:15:17] <Derick> bigmeow: I thought RPM allowed for package relocations for this purpose?
[10:15:45] <Derick> bigmeow: what you can do, is download the RPM source and compile that - it should have all the required patches in there
[10:15:46] <bigmeow> Derick: can i install rpm to my home directory?
[10:16:00] <Derick> bigmeow: I believe you can - but I have no RPM-based system to test here
[10:16:22] <bigmeow> Derick: i have only tried to install rpm using rpm -ivh xx.rpm, and it will install that package to a place where root privilege is needed:(
[10:16:38] <NodeX> bigmeow : dude, I am not a surfer LOL :P
[10:17:12] <bigmeow> Derick: dude, do you use debian? then how to install mongodb to my home directory on debian?
[10:17:36] <Derick> bigmeow: just download the binary off mongodb.org/downloads
[10:18:01] <Derick> Download the .tgz, unpack, and run ./mongod (path might be different)
[10:18:20] <bigmeow> NodeX: i am that suffered poor guy:(
[10:18:51] <bigmeow> Derick: what is the format of mongodb file?
[10:19:02] <bigmeow> Derick: seems it is kind of binary file
[10:19:05] <NodeX> no root sucks
[10:19:21] <Derick> bigmeow: which file exactly?
[10:19:29] <bigmeow> Derick: and each database maybe stored in one file, where i can find the format of that file?
[10:20:25] <Derick> I don't think we have a documented file format
[10:21:00] <Derick> http://www.slideshare.net/mdirolf/inside-mongodb-the-internals-of-an-opensource-database is probably your best bet
[10:21:58] <bigmeow> Derick: how long have you been using the boost lib, dude?
[10:22:07] <Derick> I've not used it
[10:22:12] <Derick> Just compiled it once or twice
[10:33:46] <bigmeow> Derick: so you are not a mongodb developer, dude:(
[10:33:59] <Derick> I'm not a dude :P
[10:34:06] <Derick> I work for 10gen on the MongoDB PHP driver
[10:34:36] <roxlu> hi mongodb!
[10:34:48] <Derick> hi!
[10:35:29] <roxlu> I've been asking on the google groups and in this channel for a way to set meta data for files when using gridfs and the c-driver. When using buffer writes it seems there is no way to add meta data? (when doing a normal gridfs insert you can pass a bson object as meta data).
[10:42:10] <bigmeow> roxlu: hi world:)
[10:42:23] <roxlu> hey : )
[10:42:45] <bigmeow> roxlu: which google groups do you choose to ask mongodb questions?
[10:43:31] <roxlu> https://groups.google.com/forum/?fromgroups=#!searchin/mongodb-user/meta/mongodb-user/k7S9goWVKq4/CjZZIaZufvUJ
[10:51:43] <nico__> Hi, I'm using pymongo and I'm testing the ReplicaSetConnection with read_preference. However with a small python script which lists a collection, the find() seems to be frozen at the end of the loop. If I run the same script with Connection (Single Node) instead of ReplicaSetConnection, the find() terminates normally...
[11:05:51] <bigmeow> Derick: dude, what is this? http://www.slideshare.net/pengwynn/mongodb-ruby-document-store-that-doesnt-rhyme-with-ouch
[11:06:10] <bigmeow> Derick: why is it some kind of zip file and not a pdf document:( why?
[11:06:18] <Derick> I don't run slideshare
[11:46:25] <Gargoyle> Derick: Last night when I updated my blog - I am running RC1 at home (and now RC2) on the server - the result from aggregate() was different. Is that something specific to the aggregate() function or results in general?
[11:47:24] <Derick> we changed aggregate slightly
[11:47:28] <Derick> (since beta2)
[11:47:46] <Derick> you now have an extra sub-key "results"
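
The shape change Derick describes: since the 2.2 betas the aggregate helper returns a wrapper document instead of a bare list, so callers have to read the documents out of a sub-key (named "result" in the 2.2 shell response). A sketch with a hypothetical collection:

    var out = db.posts.aggregate({ $match: { published: true } });
    // out has the form { "result" : [ /* documents */ ], "ok" : 1 }
    out.result.forEach(printjson);
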
[11:52:04] <rickogden> hi all, is it possible to store geospatial areas in MongoDB?
[11:52:35] <NodeX> yup
[11:52:53] <NodeX> http://www.mongodb.org/display/DOCS/Geospatial+Indexing
[11:54:17] <Gargoyle> Derick: Thanks. Just wanted to check another project wasn't going to need a bunch of updates to other queries.
[11:54:42] <rickogden> thanks NodeX
[12:00:11] <NodeX> :)
[12:08:38] <Derick> rickogden: I have shown you this :P
[12:09:07] <rickogden> I wasn't sure about polygons as opposed to points
[12:09:10] <Derick> rickogden: http://derickrethans.nl/indexing-free-tags.html might be of interest
[12:09:14] <Derick> ah... polygons, no
[12:09:16] <Derick> you can't store those
[12:09:25] <Derick> only use them to find points in the db that are in that polygon
[12:09:30] <rickogden> ah
[12:09:43] <rickogden> sorry, that's what I meant by areas
[12:10:40] <Derick> rickogden: there are tricks though :-)
[12:11:13] <rickogden> oh ok
[12:11:50] <rickogden> what I'm attempting to do is store some data from OSM which include a combination of nodes and buildings
[12:12:17] <Derick> yes, I do that too
[12:12:28] <Derick> for the buildings, I just store each node of the building as an array
[12:12:38] <Derick> (and the centre point)
[12:12:59] <rickogden> how do you go about calculating the centre point?
[12:13:18] <Derick> just an average of all the nodes that make up the polygon
[12:13:25] <rickogden> oh ok
[12:13:50] <rickogden> and then are you able to use that in a $near query?
[12:14:07] <rickogden> I guess you can just use the centre point for that
[12:14:26] <Derick> yeah
[12:14:37] <Derick> but the nodes that make up the polygon are good enough as well
[12:14:53] <rickogden> true, but would that return the same document multiple times?
[12:15:54] <Derick> http://derickrethans.nl/files/dump/mongo-osm-overlay.jpg
[12:15:57] <Derick> rickogden: no, just once
[12:16:02] <rickogden> oh good :)
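
A sketch of the layout Derick describes, with hypothetical field names: store the polygon's nodes as an array, store a centre point averaged from them, index the centre with a 2d index, and run $near against it.

    db.buildings.insert({
        name: "Town Hall",
        nodes: [ [-0.1276, 51.5033], [-0.1274, 51.5035], [-0.1272, 51.5031] ],
        centre: [-0.1274, 51.5033]   // average of the node coordinates
    });
    db.buildings.ensureIndex({ centre: "2d" });
    db.buildings.find({ centre: { $near: [-0.1275, 51.5032] } }).limit(10);
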
[12:18:28] <rickogden> ooh while I'm here, are there any recommend practices for tracking revision history of a MongoDB document?
[12:18:58] <ron> we just store a new version for each document.
[12:19:33] <ron> but had to implement a wonky locking mechanism for it.
[12:19:44] <rickogden> and only return the latest version?
[12:19:50] <ron> yeah
[12:20:05] <rickogden> that could work
[12:20:29] <rickogden> could reference by the OSM ID rather than document ID
[12:21:05] <ron> there are no references in mongo.
[12:23:39] <rickogden> ron: yeah, I meant for my own queries
[12:24:03] <ron> rickogden: you can do whatever you want :)
[12:25:11] <ron> whoops
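
A minimal sketch of the versioning scheme ron describes, with hypothetical field names: every revision is its own document, and readers sort on a version counter to get the latest.

    // one document per revision, keyed on an application-level id
    db.pages.insert({ docId: 42, version: 3, body: "...", savedAt: new Date() });
    // the latest revision of a given document
    db.pages.find({ docId: 42 }).sort({ version: -1 }).limit(1);
    // index to make that lookup cheap
    db.pages.ensureIndex({ docId: 1, version: -1 });
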
[15:01:42] <therealkoopa> Is it possible to use an aggregation to filter subdocuments based on position? I'd like to find a document that has an array of subdocuments, but only return subdocuments that are before one of the subdocument's id.
[15:03:20] <ron> well, define 'before' :)
[15:04:34] <Derick> therealkoopa: https://jira.mongodb.org/browse/SERVER-6074 like that?
[15:05:56] <therealkoopa> I don't think I need slice. I would like something like "All subdocuments that are before the subdocument with an id of 10", but I don't know if that's possible
[15:06:34] <Derick> ah, hmm, I don't think so
[15:06:59] <Derick> what you probably can do, is $unwind, filter out id < 10, and then group again on the $unwinded tag
[15:07:32] <therealkoopa> Does the concept of less than make sense with object ids in mongo?
[15:09:52] <therealkoopa> This seems to work: db.histories.aggregate( { $unwind: '$changesets' }, {$match: {"changesets._id": {"$gte": ObjectId("50a10a77fe69c6dee5000007")} }} , { $project: {changesets: 1}})
[15:11:06] <wereHamster> therealkoopa: yes
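
therealkoopa's working pipeline again, extended with the $group stage Derick suggested so the surviving subdocuments are folded back into one document per history:

    db.histories.aggregate(
        { $unwind: "$changesets" },
        { $match: { "changesets._id": { $gte: ObjectId("50a10a77fe69c6dee5000007") } } },
        { $group: { _id: "$_id", changesets: { $push: "$changesets" } } }
    );
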
[15:34:53] <ckd> Derick: RC2 tested and deployed!
[15:38:16] <ron> everyone does.
[15:38:57] <NodeX> hanging one of my apps and it's getting annoying
[15:41:39] <therealkoopa> I'm able to get my query to work within the mongo command line, but the node mongo driver doesn't return results when I have the match in there: { $match: { 'changesets._id': { $gte: mongoose.Schema.ObjectId("50a110dafe69c6dee500000c") }}}. Any ideas how I can figure out why the match doesn't return anything from within the node mongo driver?
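
One likely cause, offered as a guess: mongoose.Schema.ObjectId is the schema type used when declaring schemas, not a value constructor, so the comparison value never becomes a real ObjectId. The value-side constructor lives under mongoose.Types:

    // hypothetical fix for the $match above
    var ObjectId = require("mongoose").Types.ObjectId;
    var match = { $match: { "changesets._id": { $gte: new ObjectId("50a110dafe69c6dee500000c") } } };
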
[15:54:47] <Derick> ckd: yay - let me know how it goes!
[16:21:11] <alexnixon> hi all. I have two collections, c1 and c2, in the same database which have almost identical documents, except in c1 all documents have a field "x" which is not present in c2. I would like to copy the field "f" from each document in c1 into the corresponding document in c2. I have written the following: https://www.refheap.com/paste/6579 however it's incredibly slow - some updates are taking upwards of 30 seconds. Does anyone have ideas for how I could a
[16:22:05] <kali> your question has been truncated by your irc client...
[16:22:30] <alexnixon> bah. I'll post separate sentences.
[16:22:42] <alexnixon> I have two collections, c1 and c2, in the same database which have almost identical documents, except in c1 all documents have a field "x" which is not present in c2
[16:22:48] <alexnixon> I would like to copy the field "f" from each document in c1 into the corresponding document in c2
[16:22:52] <alexnixon> I have written the following: https://www.refheap.com/paste/6579 however it's incredibly slow - some updates are taking upwards of 30 seconds
[16:22:57] <alexnixon> Does anyone have ideas for how I could achieve the same effect, but more quickly?
[16:23:26] <kali> alexnixon: do you have an index on c2.title ?
[16:24:09] <kchodorow> do they have the same _ids?
[16:24:24] <alexnixon> there is an index on title, yes. the _id's are different
[16:24:49] <kchodorow> db.c1.find().forEach(function(olddoc) { db.entitiessold.update({title:olddoc.title}, {$set:{f:olddoc.f}}); })
[16:25:05] <alexnixon> (oops - replace entitiessold with c2 there)
[16:25:18] <kali> kchodorow: with a multi, i think
[16:25:30] <kchodorow> @kali: good call
[16:25:40] <kali> kchodorow: nope, he does a findOne
[16:25:41] <kchodorow> db.c1.find().forEach(function(olddoc) { db.entitiessold.update({title:olddoc.title}, {$set:{f:olddoc.f}}, false, true); })
[16:25:42] <alexnixon> thanks - can you explain why that would have better performance?
[16:26:01] <kchodorow> @kali, ah, true
[16:26:20] <kali> kchodorow: doesn't hurt :)
[16:26:22] <kchodorow> alexnixon: you're doing an extra findOne and rewriting the whole doc each time (instead of jut adding a field)
[16:26:45] <kali> i'm afraid that will only be marginally faster
[16:26:57] <kchodorow> yeah, it probably still won't be super fast, because it has to grow every doc in your collection
[16:27:47] <kali> i can't see how an update can take 30s. you're sure you have an index on c2.title ?
[16:27:59] <alexnixon> yeah I've read about the 'shuffling' of docs when you grow them, and suspected that's what I'm running in to
[16:28:58] <alexnixon> yes there's definitely an index, and the collection has around 4 million rows
[16:29:09] <alexnixon> s/rows/documents :-)
[16:29:54] <alexnixon> it's not all inserts that are taking that long - it's just that when I hammer the console with db.currentOp(), I often see long running inserts
[16:31:59] <alexnixon> is there a general solution to the "collection migration" problem - i.e. you want to update a whole bunch of documents before the universe expires?
[16:32:51] <alexnixon> I've hit this problem before and wasn't too sure what to do. Perhaps a mongodump/mongorestore, with some flag telling mongo to leave some extra padding for each doc when importing?
[16:35:22] <kali> alexnixon: no general solutions, more like evasive action, here
[16:39:36] <kchodorow> alexnixon: you could try compacting the collection first, you can tell compact to leave padding on each document (see paddingFactor: http://docs.mongodb.org/manual/reference/command/compact/)
[16:39:49] <kchodorow> but then you're just trading update time for compaction time
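
The compact-with-padding idea as a shell sketch: pre-allocate slack in each document so the later $set of "f" does not force document moves. paddingFactor multiplies each document's allocated size (1.0 to 4.0).

    // blocks the database while it runs, so not on a live system
    db.runCommand({ compact: "c2", paddingFactor: 1.5 });
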
[16:41:52] <neiz> anyone familiar with 10gen c# driver?
[16:42:07] <ron> don't ask meta questions. ask your question.
[16:43:11] <alexnixon> kchodorow: thanks for that - it could well be quicker than waiting for the updates...worth a try at least
[16:43:46] <kchodorow> alexnixon: compact does block everything else from running, so don't do it on a live system
[16:44:44] <alexnixon> kchodorow: cool that's not a problem for me right now as I'm doing this manipulation off-line, and then will dump/restore to the live system when it's finished.
[16:46:52] <neiz> I am updating a collection by using Collection.Update.PushWrapped("Item", item) to push my object in as an array. Once inserted as an array, how do I access individual elements? (10gen c# driver)
[17:14:35] <hays> is it fair to think of mongodb as just a giant hash table
[17:14:50] <hays> or json object
[17:15:04] <Derick> we have secondary indexes
[17:16:16] <kali> hays: not really, it has some knowledge of what is in the document, so you can have indexes, server side filtering, selective retrieval, atomic updates
[17:18:24] <hays> hmm ok
[17:22:55] <NodeX> and some all round juicyness
[17:23:25] <hays> i am debating whether i want to dive into mongodb or just stick with sql
[17:23:38] <NodeX> all down to your app
[17:25:45] <doxavore> Can you not count({_id: { $lt: some_object_id }})?
[17:26:17] <doxavore> I get the full count even when I swap $lt for $gt :-/
[17:36:13] <doxavore> never mind - i'm an idiot. forgot to add coll.count(:query => my_opts) in my code. hooray mongodb.
[18:00:49] <totic> What is the best way to do Enums in mongodb?
[19:07:32] <wereHamster> totic: numbers, strings, ..
[19:08:21] <totic> together they would do that…
[19:08:37] <srp_> Hello, I have this BSON in my database.. http://paste.kde.org/604988/ I want to update this by removing {"50a13bec21966423837a02cd" : 0 } . how do i do this? I'm a newbie and tried this(http://paste.kde.org/605000/) query but i think its wrong! i'd be glad if someone helped me! :)
[19:09:02] <totic> wereHamster: if you want to save space you dont use strings.. if you want to make it human readable you dont use numbers .. the combo is enums
[19:09:10] <totic> saved as numbers, represented as strings
[19:09:24] <wereHamster> totic: great. So what was yer question again?
[19:09:54] <totic> if there were enums in mongo
[19:10:02] <totic> but I found it, its called choices
[19:10:09] <wereHamster> srp_: maybe you want $pull
[19:10:16] <totic> weeb1e_: http://mongoengine-odm.readthedocs.org/en/latest/upgrade.html#choice-options
[19:10:21] <totic> sorry*
[19:10:26] <totic> wereHamster: http://mongoengine-odm.readthedocs.org/en/latest/upgrade.html#choice-options
[19:10:46] <NodeX> that's not in mongo
[19:10:50] <NodeX> that's in your driver
[19:11:02] <wereHamster> totic: well, that is specific to your ODM. Next time mention what language/driver/mapper you are using
[19:11:20] <totic> sorry, I am using Python
[19:11:27] <wereHamster> now i don't care anymore.
[19:12:34] <totic> :(
[19:13:07] <srp_> wereHamster: I tried it.. not getting the query right :( this is what i tried.. db.sim.update({"type" : "note"}, {"similarity" : {"$pull" : [{"50a13bec21966423837a02cd": 1}]}}) but it said "uncaught exception: field names cannot start with $ [$pull]"
[19:14:06] <wereHamster> srp_: read the $pull syntax
[19:14:24] <srp_> wereHamster: ok.. just a min.. :)
[19:16:50] <srp_> wereHamster: ah.. saw it.. i understood that i cannot use $pull here because i cannot know the value in advance. I only know the key in the example i showed! how do i do that?
[19:17:50] <srp_> wereHamster: is there a wildcard in mongodb which matches anything in the place of value.. ?
[19:18:38] <srp_> like.. db.sim.update({"type" : "note"}, {"similarity" : {"$pull" : {"50a13bec21966423837a02cd": WILDCARD_HERE}}})
[19:19:27] <wereHamster> but that only works on strings
[19:19:31] <wereHamster> //
[19:20:55] <srp_> wereHamster: no problem.. thats fine.. i can store those numbers as strings as i have very few of these floating point numbers .. how can i use a wildcard on strings?
[19:21:56] <wereHamster> I'm just guessing here. regex (// is an empty regex)
[19:22:25] <srp_> wereHamster: ok.. let me try that.. thanks :)
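
For srp_'s shape, where "similarity" is an array of single-key sub-documents and only the key is known up front, $pull can take a query on the array element instead of an exact value; $exists should match any value under that key. A sketch, untested against that exact data:

    db.sim.update(
        { type: "note" },
        { $pull: { similarity: { "50a13bec21966423837a02cd": { $exists: true } } } }
    );
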
[19:53:03] <fideloper> Hey! Does anyone have any opinions on rethinkdb? Looks interesting.
[19:53:09] <fideloper> Wondering what ya'll thought about it
[19:56:08] <FrenkyNet> fideloper: no php support, aka fail for me
[19:57:07] <fideloper> yeah I noticed that as well - same here
[19:57:25] <fideloper> that's a big segment of web dev to ignore
[20:03:30] <ee99ee> hi... I'm having trouble with mongo and PHP, getting this simple query to work: http://pastebin.com/E51XQ44d
[20:03:40] <ee99ee> anyone have any ideas? it returns the right data at the command line, but not within PHP
[20:03:54] <ee99ee> I think it has something to do with the nested array
[20:06:15] <Gargoyle> ee99ee: Your array looks OK. But I have never used new MongoCollection(). Do you get the same result if you use $this->mongodb->users->find() ?
[20:06:55] <ee99ee> Gargoyle: let me try, 1 sec
[20:07:44] <ee99ee> Gargoyle: same result, it just returns null
[20:08:07] <Gargoyle> ee99ee: And if you pass nothing to find() you get a cursor?
[20:08:39] <ee99ee> Gargoyle: yes, I get object(MongoCursor)#24 (0) { }
[20:09:53] <Gargoyle> Very odd that you are getting null!
[20:10:04] <Gargoyle> what versions of things are you running?
[20:10:46] <gsamat> hello!
[20:11:11] <gsamat> is it good idea to build compound index on 10GB production DB?
[20:11:22] <ee99ee> Gargoyle: php 5.3.3
[20:11:23] <wereHamster> depends
[20:11:32] <ee99ee> mongo 1.2.12
[20:11:44] <wereHamster> gsamat: is it a good idea to eat fish?
[20:11:51] <ee99ee> well, that's the driver version I mean
[20:11:58] <gsamat> :) i have mongo 1.8.5
[20:12:12] <ee99ee> mongo server 2.0.7
[20:12:40] <ee99ee> this is centos 6.3... driver that came with it
[20:12:41] <gsamat> I have almost no knowledge in mongo, but I see lots of slow (>50sec) queries
[20:13:00] <gsamat> with same fields being filtered
[20:14:16] <ee99ee> 1.2.12 is the latest version
[20:14:29] <ee99ee> of the PECL driver for PHP
[20:14:35] <ee99ee> Gargoyle: so, looks like I'm up to date
[20:15:01] <Gargoyle> odd!
[20:15:06] <gsamat> can you point me to some docs I should read?
[20:15:14] <gsamat> related to indexes
[20:15:17] <ee99ee> Gargoyle: fix it!!! :)
[20:16:25] <Derick> ee99ee: instead of: new MongoCollection($this->mongodb, 'users'); you can just do $this->mongodb->users
[20:16:28] <Gargoyle> ee99ee: You are deffo connecting to all the right db's / collections. Not using a test db by mistake (Not that I have ever done that and run myself round in circles for hours!! ;)
[20:16:39] <Derick> also, is $this->mongodb the database, or the collection?
[20:17:11] <Derick> sorry
[20:17:16] <Derick> also, is $this->mongodb the database, or the connection?
[20:17:40] <ee99ee> Derick: yeah, I realize that now... I'm changing my code to that... but I still have the same issue going on
[20:17:48] <ee99ee> Gargoyle: nope, other queries are working fine
[20:18:00] <ee99ee> I just can't do nested queries for some reason
[20:18:36] <wereHamster> gsamat: google.com, enter 'mongodb index' into the search field, then press the button
[20:18:40] <Derick> no extra space on either side of abc123?
[20:18:44] <trtz> fideloper: every database other than MongoDB is better
[20:18:45] <Derick> do a var_dump on $query please
[20:18:45] <ee99ee> nope
[20:18:56] <gsamat> wereHamster, thank you!
[20:19:32] <ee99ee> Derick: http://grab.by/htam
[20:22:55] <ee99ee> @Derick: what do you think?
[20:24:20] <Derick> can you do a var_dump of $collection_users too ?
[20:24:44] <ee99ee> @Derick: http://grab.by/htb4
[20:24:52] <ee99ee> looks normal to me
[20:25:02] <Derick> yes
[20:25:14] <Derick> are you talking to the correct database?
[20:25:27] <Derick> - i have had typos in it that caused issues
[20:25:33] <ee99ee> yes, absolutely sure... if I just do find() then I see data
[20:25:47] <ee99ee> there is only one record in that collection
[20:26:24] <ckd> ee99ee: in PHP, do you get results with the query being empty?
[20:26:25] <Gargoyle> ee99ee: Can you pastebin the output from the shell for db.users.findOne() or is it sensitive data?
[20:26:30] <Derick> ee99ee: can you dump the output of your "findOne() instead of find()" ?
[20:26:42] <ee99ee> ckd: yes, I do
[20:27:43] <ee99ee> @Derick: findOne(): http://grab.by/htbs and find(): http://grab.by/htbu
[20:27:51] <ee99ee> Gargoyle: sure, 1 sec
[20:28:01] <Derick> findOne is empty...
[20:29:19] <ee99ee> Gargoyle: here's the data: http://pastebin.com/xk6GchUi
[20:29:31] <ee99ee> Gargoyle: I sanitized it a bit to remove my email etc, but that's all
[20:29:57] <ckd> ah
[20:30:07] <Derick> ee99ee: findone in php was empty
[20:30:09] <ckd> access_token is inside an array inside sessions
[20:30:40] <Gargoyle> ee99ee: In your screenshot for Derick you are passing a real session id, in that pastebin it's "abc123"
[20:31:05] <Gargoyle> sorry, access token
[20:31:11] <Derick> ee99ee: the output of findOne() should *not* be empty if you have data in your database and collection
[20:31:40] <Derick> ee99ee: can you show how you connect and obtain $this->mongodb as well?
[20:31:55] <ee99ee> gd it
[20:31:59] <ee99ee> I'm a moron
[20:32:17] <ee99ee> Gargoyle: thank you
[20:32:31] <ee99ee> Gargoyle: I am an idiot
[20:32:39] <ee99ee> I had wired up some stuff to pastebin
[20:32:46] <ee99ee> sorry guys
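
The shape ckd spotted: access_token sits inside the sub-documents of the sessions array, so the query has to reach it with dot notation, which matches inside arrays as well. The shell equivalent of the corrected lookup:

    // matches a user document whose sessions array contains
    // an element with this access_token
    db.users.findOne({ "sessions.access_token": "abc123" });
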
[20:33:11] <Nicolas_Leonidas> Hi, I want to move the docs from one collection to another, this is how I'm doing it
[20:33:12] <Nicolas_Leonidas> db.smalldaily.find().forEach(function(doc){db.largedaily.save(doc);db.smalldaily.remove(doc);});
[20:33:46] <Nicolas_Leonidas> This changes the "_id" attribute of docs, correct? is there a way to preserve the id that rows have in smalldaily, when they are moved to largedaily?
[20:35:06] <Gargoyle> I would have thought the _id would be preserved. Mongo only creates one if there isn't one in the doc you are saving.
[20:35:10] <Derick> why should it change the _id?
[20:35:12] <Derick> it's part of doc
[20:36:25] <Gargoyle> Guess what Derick?
[20:36:30] <Derick> hmm?
[20:36:41] <Derick> don't say you found another bug ;-)
[20:36:44] <Gargoyle> I've got another segfault!
[20:36:56] <ron> \o/
[20:37:05] <ron> Derick: you should give him a prize!
[20:37:13] <Gargoyle> On code that had been running for about two weeks - just started segfaulting this weekend.
[20:37:35] <Derick> Gargoyle: you know the drill :-)
[20:38:10] <Gargoyle> It's in a loop, so it's got to be data related and not code.
[20:38:20] <Nicolas_Leonidas> Derick: so it doesn't change?
[20:39:24] <Nicolas_Leonidas> is there a place I can turn an ObjectId into human readable values just to see what exactly is the date and stuff?
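
On that last question: an ObjectId embeds its creation time in its first four bytes, and the shell can extract it directly:

    // returns an ISODate holding the ObjectId's creation time
    ObjectId("50a13bec21966423837a02cd").getTimestamp()
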
[20:43:37] <Gargoyle> Derick: odd! :/
[20:44:34] <Gargoyle> Derick: client server 1 segfaulted 3 times on the run. I realised this isn't the one I installed valgrind on, so switched to the other box - and the script runs fine!
[20:45:06] <Gargoyle> going to see if I can spot a difference - both servers should be identical.
[20:45:27] <Derick> Gargoyle: export USE_ZEND_ALLOC=0 to make debugging easier
[21:04:17] <Gargoyle> Derick: http://pastie.org/private/2rbvejp25li8ojnxa2x0w
[21:04:50] <Gargoyle> Thats the difference between the two servers. They are running 1.3.0RC1.
[21:04:50] <Derick> Error 503 Service Unavailable
[21:04:56] <Derick> Gargoyle: not RC2?
[21:05:20] <Derick> anyway, no differences at all
[21:05:36] <Derick> how easy is it for you to reproduce it with one go?
[21:05:43] <Gargoyle> very.
[21:06:00] <Gargoyle> Is that link working now? I think pastie had a hiccup?
[21:06:18] <Derick> Easy enough for you to create a one-script-one-request solution that includes the data in the the script?
[21:06:22] <Derick> yes, the pastie works now
[21:07:22] <Gargoyle> Errr. Possibly. I can run my script isolated to the one set of records. Might take me a bit longer to dig up the single doc.
[21:08:06] <Gargoyle> But by the logic of those diffs, if I install the debug stuff on the other server, it'll work! :S
[21:08:20] <Derick> what's "the debug stuff" ?
[21:08:51] <Gargoyle> The packages at the top. gdb, libc6-dbg, libpython2.7 and valgrind
[21:09:51] <Derick> oh, that should not have any effect
[21:12:39] <roxlu> When I have an object id, how can I use the shell to print its value?
[21:13:19] <Nicolas_Leonidas> Hi, I'm trying to use find based on two conditions
[21:13:21] <Nicolas_Leonidas> http://pastebin.com/yECKv3VK
[21:13:29] <Nicolas_Leonidas> but it doesn't return anything, am I doing this wrong?
[21:15:50] <Derick> Nicolas_Leonidas: or or and query?
[21:16:35] <Nicolas_Leonidas> and!
[21:16:42] <roxlu> Ah got it: db.images.files.find( { "_id" : ObjectId("50A162D07AA35C1300000271") } ).toArray()
[21:17:07] <Derick> Nicolas_Leonidas: without knowing what your variables are, I've no idea what to say
[21:18:02] <Nicolas_Leonidas> Derick: start and end are dates, lid is an integer
[21:18:17] <Nicolas_Leonidas> I wanna find documents in that date range with that lid
[21:19:21] <Derick> Nicolas_Leonidas: var_dump them
[21:26:37] <Nicolas_Leonidas> Derick: var_dump what?
[21:26:40] <Nicolas_Leonidas> the results?
[21:26:50] <Nicolas_Leonidas> it's empty I know the problem is with the query I'm sending to mongodb
[21:28:25] <Derick> no, the variables that you're putting in your query
[21:30:13] <Nicolas_Leonidas> ok, I json_encoded them too {"$and":{"date":{"$gt":"2010-01-01","$lte":"2010-01-02"},"lid":6209}}
[21:31:00] <Nicolas_Leonidas> is this wrong?
[21:31:28] <Derick> why are you json encoding anything?
[21:31:46] <Derick> are you sure "lid" is stored as a number and not a string in MongoDB?
[21:32:06] <Derick> and no need for the $and (you didn't have that in your first example!) as that is the default
[21:32:09] <Nicolas_Leonidas> it is a number yes
[21:32:15] <Derick> Nicolas_Leonidas: please double check that
[21:32:16] <Nicolas_Leonidas> I json encoded so that I can type it in mongo shell
[21:32:30] <Nicolas_Leonidas> Derick: just did, yeah it's a number, it's not quoted
[21:32:45] <Derick> does it work without the date part?
[21:32:56] <Gargoyle> Derick: This is an odd one!
[21:33:05] <Derick> Gargoyle: i'm sure :-/
[21:33:28] <Gargoyle> Derick: I have isolated it to two iterations of a loop and pulled out the code.
[21:33:30] <roxlu> hi guys, when I'm using gridfs, and have a collection: "images.files", how can I update the meta data when I've got an oid ?
[21:33:32] <Gargoyle> Derick: http://pastie.org/private/ak89ld8pfvwydgzakhbg
[21:33:36] <Nicolas_Leonidas> when I do this in shell => db.largedaily.find({"$and":{"date":{"$gt":"2010-01-01","$lte":"2010-01-02"},"lid":6209}});
[21:33:45] <Nicolas_Leonidas> it says error: { "$err" : "$and expression must be a nonempty array", "code" : 14816 }
[21:34:33] <Derick> Nicolas_Leonidas: $and wants an array ([]) not an object ({})
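
The corrected shell form of that query, using the implicit AND Derick recommends (the date values are the strings Nicolas typed; string comparison works here because they are in sortable ISO form):

    // conditions in a single query document are ANDed implicitly
    db.largedaily.find({
        date: { $gt: "2010-01-01", $lte: "2010-01-02" },
        lid: 6209
    });
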
[21:34:46] <Gargoyle> Derick: Line 30 is the one segfaulting. But ONLY when the other commands have run. If I remove lines 3 to 29 which is essentially what the previous loop iteration is doing, it does not crash.
[21:35:19] <Derick> Gargoyle: and what does gdb's "bt full" say?
[21:35:31] <Derick> Gargoyle: also, did you run this with the env var "USE_ZEND_ALLOC=0" ?
[21:35:37] <Gargoyle> Not put gdb onto this box yet.
[21:35:43] <Derick> ok
[21:36:00] <Derick> try the env var anyway, as zend's allocator hides weird stuff
[21:36:19] <Gargoyle> Even without gdb?
[21:36:23] <Derick> yes
[21:36:56] <Gargoyle> No segfault!
[21:37:21] <Derick> time to use valgrind I think
[21:37:40] <Gargoyle> the other server hasn't been rebooted since we were last debugging, so USE_ZEND_ALLOC is still exported on the working machine.
[21:37:57] <Derick> Gargoyle: I'm getting tired here though, need some down time - are you around tomorrow morning UTC?
[21:38:08] <Gargoyle> So I should be able to get this to crash on the other machine by setting that var to 1?
[21:38:15] <Derick> try it :-)
[21:38:22] <Gargoyle> Derick: Yeah. Normally get online about 8
[21:38:25] <Derick> probably not, but you never know...
[21:38:35] <Derick> Gargoyle: ok, a bit after that - need to fix some super secret stuff first :)
[21:38:42] <Gargoyle> ha ha! :)
[22:04:32] <Nicolas_Leonidas> Derick: I asked my question with more details here: http://stackoverflow.com/questions/13352302/mongodb-search-using-a-date-range-and-another-condition
[22:05:56] <Derick> $searchCriteria = array('$and' =>
[22:05:56] <Derick> array(
[22:05:56] <Derick> 'date' => array('$gt' => $start, '$lte' => $end),
[22:05:56] <Derick> 'lid' => $lid
[22:05:57] <Derick> ));
[22:06:00] <Derick> is wrong
[22:06:10] <Derick> I already said you don't need the $and:
[22:06:36] <Derick> $searchCriteria =
[22:06:36] <Derick> array(
[22:06:36] <Derick> 'date' => array('$gt' => $start, '$lte' => $end),
[22:06:36] <Derick> 'lid' => $lid
[22:06:39] <Derick> );
[22:06:41] <Derick> is enough
[22:07:21] <Derick> Nicolas_Leonidas: I dunno, all the answers on SO I've already answered here...
[22:09:38] <Nicolas_Leonidas> Derick: that actually works, but returns no results let me make sure there is data for that
[22:10:41] <Nicolas_Leonidas> Derick: yah man it works, there was no data in db for that lid
[22:10:55] <Derick> Answered it on SO too
[22:11:29] <Nicolas_Leonidas> Derick: thank you very much
[22:11:38] <Derick> I'm off to bed now, g/night
[22:38:27] <navu> Hi Everyone
[22:39:42] <navu> I'm quite new to mongodb and wanted to find out some information. I'm planning to deploy a mongodb project to Azure and would like to take advantage of the replica sets
[22:57:30] <timhaines> Guys - a newb question. When does it make sense to put many collections inside one DB, and when should collections have their own DBs? I read the write lock is now set at DB level. Does this mean it's beneficial for each collection to have their own db?
[23:10:31] <Freso> What's the "definitive guide" book for MongoDB? I see both O'Reilly and Apress having one.
[23:13:56] <jin_> Hey guys, I have a question regarding setting up replicasets. Currently, I'm setting up the mongod instances on three separate VMs using a chef recipe. After it is setup, one of the VM executes a recipe that initializes it by itself, then gathers the other nodes' fqdn to reconfigure the replicaset. The first node has a configuration file setup as it should, but the other nodes are not initializing, although they are being contacted by
[23:14:06] <jin_> I would appreciate any pointers.
[23:40:02] <ckd> Freso: Kristina is hard at work on a second edition of her O'Reilly book
[23:40:55] <ckd> timhaines: that's an excellent example of when it's great to have one collection per DB
[23:41:37] <timhaines> ckd: Is there any benefit to having more than one collection in a db?
[23:42:22] <Freso> ckd: It probably won't be done by Nov. 24th though. :)
[23:43:07] <ckd> Freso: Probably not :) hers is def a great book, but factor in that both books are already a bit out of date
[23:43:19] <ckd> Freso: but the basic concepts are def there
[23:44:37] <Freso> ckd: Yeah. I'm compiling a list of books for my birthday if anyone wants to make a donation for a project I'm working on (which will eventually utilise MongoDB).
[23:44:57] <Freso> ckd: "list of books", read: Amazon wishlist
[23:45:00] <Freso> :p
[23:45:29] <ckd> timhaines: that's a question for somebody more competent than myself… believe it or not, for what I use Mongo for, I don't use map reduce or aggregation, so i don't know if either of those are limited in cross-database stuff