[00:20:30] <cheeser> joannac: i am now. back home for the evening.
[01:08:33] <decimal_> could someone do me a huge favor and explain why the following function isn't returning any documents when fired with a proper id? http://pastebin.com/QiiHr6Xx
[01:10:48] <joannac> decimal_: checked the _id exists by running in the mongo shell?
[04:43:42] <Bluefoxicy> you can select a document out with find(), then find({'_id': {'$in': $objectidlist}})
[04:44:14] <Bluefoxicy> you can write a map-reduce to do that and just return the whole document with a list of objectids expanded to documents from another collection--that's a join operation.
[04:56:34] <retran> and only evidence provided for these points are cute little graphics and photographs
[04:58:24] <retran> strangely, also seems to imply that the Mongo tutorial regarding TV shows is misleading people into thinking Mongo's is ideal when it's really not
[04:58:44] <retran> the article should really be titled
[04:58:56] <retran> "how i fucked up a Mongo DB design"
[04:59:35] <retran> or maybe "how i got frustrated using a system novel to me"
[05:11:02] <halfamind> Can anyone help me understand why this matches docs where the 2 fields are equal?
[05:24:42] <Bluefoxicy> also that article shows more fundamental misunderstanding.
[05:25:03] <Bluefoxicy> What you want for a social network is some sort of graph-based database like Neo4j or something
[05:25:19] <Bluefoxicy> that shows relationships between things and other things
[05:25:49] <Bluefoxicy> The nodes themselves would carry IDs that may function as _id fields in MongoDB (even maybe being objectid()) or keys in PostgreSQL
[08:43:09] <naillwind> is it somehow possible to create an array before $addToSet if it turns out that the array doesn't already exist in a document? e.g. { _id: "abc", data: { items: "" } } db.collection.update({ _id: "abc" }, { $addToSet: { data: { friends: [{ friendId: "xyz" }] }}}); <- example doesn't work, but is there something like this that does?
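[Editor's note: `$addToSet` creates the array itself when the field is missing, so no separate creation step is needed; the example above likely fails because it treats `data` as a whole subdocument instead of using dot notation. A sketch of the usual form (shell session, using the collection and field names from the question):]

```javascript
// $addToSet with dot notation: if "data.friends" does not exist yet,
// MongoDB creates it as an array and adds the element.
db.collection.update(
    { _id: "abc" },
    { $addToSet: { "data.friends": { friendId: "xyz" } } }
);
// Running it again with the same element is a no-op: $addToSet only
// adds values not already present. Caveat: if "data.friends" already
// exists as a non-array value, the update fails with an error.
```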
[10:58:37] <arvin_> it also added all the servers that were part of the replica set
[10:59:39] <arvin_> so i guess the wiki is outdated: you don't need to add all servers, just one server that is part of a set to add the whole replicaSet.
[12:16:12] <Nodex> perhaps add some context to that?
[12:34:29] <runa_> heyas. I have 3.5 MM json docs (in separate files) I want to load into a mongo collection. any hints?
[12:34:58] <runa_> mongoimport --jsonArray says it will load up to 16MB
[12:37:46] <runa_> um. afaik, mongoexport --collection output is just a concatenation of json docs?
[12:40:20] <Nilshar> Hello all, there is something I don't understand in my mongo setup. I'm using the php driver to connect to a replica set, giving php the master and slave hosts. all works fine when all servers are up, but if I shut down the slave host (not only the mongod, but the whole host), then all connections to mongodb are slow (>3s). any idea why?
[13:22:12] <F|shie> hello all im trying to install luamongo, but i get this error while running make. mongo_bsontypes.cpp: In function ‘int bson_type_ObjectID(lua_State*)’:mongo_bsontypes.cpp:76: error: ‘gen’ is not a member of ‘mongo::OID’ make: *** [mongo_bsontypes.o] Error 1. Got source from https://github.com/moai/luamongo
[13:32:45] <ninkotech> hi... how many concurrent clients can single mongodb handle well?
[13:45:03] <Nodex> ninkotech : connections are pooled by drivers
[14:42:38] <Derick> also, slaveOk does do nothing for writing/updating data, as that *has* to go to a primary
[14:42:54] <Derick> kali: well, it depends ... I've never seen it - but you do indeed need to take care of it.
[14:43:59] <bin> Derick well but i set write concerns ..
[14:44:21] <kali> Derick: i've seen it... as many other subtle problem, it only manifests on loaded replicated system, like a production environment. never on development :)
[14:44:35] <Derick> bin: you can set write concerns, but that has nothing to do with slaveok/read preferences
[14:45:47] <Derick> bin: yes, but that write will still fail then
[14:46:03] <bin> well yes but the current information from read will be up to date
[14:46:10] <bin> because no other write operation has been executed..
[14:46:56] <Derick> a write concern of w=3 will *not* guarantee that *other* connections will see the new data on all nodes at the same time - it only works for the connection/client that does the w=3 write btw
[14:49:24] <bin> sorry but can you explain it a little bit with an example ..
[14:51:09] <Derick> If script A has written to *two* nodes, a script B can read from the third node (either through slaveOk or directly) and still see outdated data
[14:51:14] <bin> well, with w=3 isn't it guaranteed that at least 2 secondaries have the write operation replicated?
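[Editor's note: a sketch of Derick's point, matching his 3-node example. Exact write-concern syntax varies by driver and server version (2013-era shells used `getLastError` with `w`); the key fact is that the guarantee is only visible to the connection that requested it:]

```javascript
// Script A: acknowledge the write once it has reached 2 of the 3 nodes.
db.orders.insert({ _id: 1, status: "new" },
                 { writeConcern: { w: 2, wtimeout: 5000 } });

// Script B, on a *different* connection reading from the third node
// (via slaveOk / a secondary read preference), can still see stale
// data: w=2 says nothing about which node another client reads from.
rs.slaveOk();
db.orders.find({ _id: 1 });  // may be empty until replication catches up
```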
[15:10:39] <tomasso> I need to do some operation every time a document is inserted into a collection. To do so, i wrote a ruby process that polls the mongo collection doing a find({}).count(). the thing is that polling mongo doesn't seem to be good.. since mongo doesn't support triggers, what would be the best way to implement something like that?
[15:12:28] <sflint_> tomasso : are you using a data access layer over top of mongo?
[15:12:38] <sflint_> or just doing straight writes to mongo?
[15:13:03] <tomasso> sflint_: using the ruby driver
[15:13:28] <tomasso> the writes also from another ruby process
[15:14:56] <sflint_> tomasso....i mean a little higher than the driver....another abstraction layer of code between your code and the database, aka a DAL (Data Access Layer)?
[15:15:23] <sflint_> if you were then you could implement a method that does this operation for you in that layer....kinda like a trigger
[15:15:38] <sflint_> I am not a big fan of triggers in the database anyway...any database...
[15:15:58] <banzounet> When you do db.collection.remove() does mongo return the number of rows deleted?
[15:16:46] <Derick> tomasso: I would suggest doing something like is described here: http://derickrethans.nl/mongodb-and-solr.html — it's in PHP but you can do the same through ruby. You wouldn't need to actively poll then.
[15:17:41] <sflint_> Derick : do you know about "mongo-connector" ?
[15:18:01] <banzounet> I'm trying to delete all my contacts before a certain date I tried this : db.contacts.remove({ createdAt: { $lt: { '$date': '2013-12-03T01:00:20.000Z' } } }) but it doesn't delete anything
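[Editor's note: the `{'$date': ...}` form is extended JSON, meant for imports and drivers, not the shell; as written, the query compares `createdAt` against a literal subdocument, which matches nothing. A sketch of what should work, assuming `createdAt` is stored as a BSON date:]

```javascript
// Compare against a real BSON date, not the extended-JSON wrapper:
db.contacts.remove({ createdAt: { $lt: ISODate("2013-12-03T01:00:20.000Z") } });
// new Date("2013-12-03T01:00:20.000Z") works equally well in the shell.
```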
[15:20:07] <sflint_> banzounet...for a lot of delete operations I use a .js file and verify what I am deleting first. And instead of deleting, actually just move to a separate collection (soft delete)
[15:20:14] <tomasso> i understand.. good idea.. the thing is that the writer and the consumer are different processes. the collection is a queue for a messaging system that i don't want to collapse from receiving too many requests; i want the first process to iterate over the new elements to send the requests, so i'm able to control even delays if necessary
[15:49:37] <banzounet> When I tried to restore the data from today I get Wed Dec 4 16:51:19.487 ERROR: ERROR: root directory must be a dump of a single database
[15:49:40] <banzounet> Wed Dec 4 16:51:19.487 ERROR: when specifying a db name with --db
[16:28:58] <patweb> i'm attempting to use a tailable cursor against a capped collection, but whenever the collection gets flooded the cursor disconnects from the collection. i've looked into doing batches, but it still ends up failing. is there something that i should do to ensure that the cursor stays alive?
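[Editor's note: a minimal tailable-cursor loop in the shell (collection name invented; option syntax matches 2013-era shells). Tailable cursors die when they are invalidated, e.g. when a flood of writes overwrites the reader's position in the capped collection, so robust code loops around cursor creation rather than trusting one cursor to stay alive. Resuming by `_id` assumes monotonically increasing `_id`s, which holds for ObjectIds from a single writer:]

```javascript
// Tail a capped collection; awaitData makes the server block briefly
// for new data instead of immediately returning an empty batch.
var lastSeen = null;
while (true) {
    var cur = db.events.find(lastSeen ? { _id: { $gt: lastSeen } } : {})
                       .addOption(DBQuery.Option.tailable)
                       .addOption(DBQuery.Option.awaitData);
    while (cur.hasNext()) {
        var doc = cur.next();
        lastSeen = doc._id;
        printjson(doc);       // process the document here
    }
    // The cursor is dead at this point (invalidated or fell off the
    // end); recreate it from the last _id we processed.
}
```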
[17:10:05] <fakewaffle> The system.indexes.json exported by Mongo doesn't include all the fields I have specified in Mongoose. It used to export all of them, but no longer does.
[17:10:42] <Nodex> fakewaffle : is there a question coming or just that statement?
[17:25:11] <morfin> i am not sure whether i should move to mongodb with that structure
[17:25:17] <Nodex> but you've yet to say why you have a problem
[17:25:28] <Nodex> 17:07:57] <morfin> i need to store lots of rows with variable number of fields
[17:25:49] <Nodex> that doesn't make sense as a question
[17:26:47] <morfin> right now i am using postgresql, but as i said there are some problems (actually i think they're problems of any RDBMS) with storing a variable number of fields. so is it a good idea to rewrite everything to MongoDB and use it as the backend for my app?
[17:27:55] <Nodex> in your original example you wanted to store some contact details for a person e.g skype,phone,email etc etc correct?
[17:28:34] <Nodex> in mongo you would achieve this with a sub document ... contact : {email:...,phone:....,skype:.....}
[17:28:59] <Nodex> you can do the same thing in postgres afaik using the Json type
[17:29:01] <morfin> there can be lots of fields; some of them exist, some of them don't exist for clients. also i'll need to upload from xlsx
[17:29:28] <Nodex> mongodb doesn't require anything ... if Bob has skype then great, else it doesn't matter
[17:30:06] <Nodex> plus your upload method is part of your app not the database
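[Editor's note: to illustrate Nodex's point about schema flexibility, a shell sketch with invented names. Documents in one collection don't need the same fields, so a missing contact method is simply absent:]

```javascript
// Two people, different contact fields; neither shape is "wrong".
db.people.insert({ name: "Bob",   contact: { email: "bob@example.com", skype: "bob77" } });
db.people.insert({ name: "Alice", contact: { phone: "+1-555-0100" } });

// Querying on a field that some documents lack just skips them:
db.people.find({ "contact.skype": { $exists: true } });  // matches Bob only
```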
[17:32:02] <du2x> Hello you guys. I have a question. I have an api that accesses mongodb and does stuff with its data. Should I open a new connection for each request, or keep an opened connection forever?
[17:34:01] <morfin> basically i think i'll use Symfony 2 and Doctrine 2 with MongoDB driver and probably native app(for some critical operations)
[17:53:23] <Nodex> morfin : Personally I would advise against bloated frameworks
[17:53:35] <Nodex> you've already mentioned you don't have system resources
[18:14:21] <azathoth99> but other 2 nodes are pissed and crash mongo
[18:15:33] <lazypower> When defining a model map in MongoDB, are collections of nested bson documents represented as a list? I'm not finding this in the documentation and I know it's there, i just don't see it :|
[18:20:26] <azathoth99> how do I print the size of the oplog?
[18:23:57] <du2x> Hello you guys. I have a question. I have an api that accesses mongodb and does stuff with its data. Should I open a new connection for each request, or keep an opened connection forever?
[18:26:59] <cheeser> most drivers, iirc, maintain an internal connection pool in whatever their version of MongoClient is.
[19:57:50] <clever_> is update supposed to delete all fields i didn't supply it with? it's behaving more like save?
[19:58:52] <kali> clever_: there are two ways of using update: either provide a full document replacement (as you did) or use the $ modifiers ($set $addToSet, ...) to make changes in place
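[Editor's note: the two update styles kali describes, side by side in a shell sketch:]

```javascript
db.things.insert({ _id: 1, a: 1, b: 2 });

// 1) Full-document replacement: the new document REPLACES everything
//    except _id, so b disappears here (this is what surprised clever_).
db.things.update({ _id: 1 }, { a: 10 });          // doc is now { _id: 1, a: 10 }

// 2) Update operators: change only the named fields, leave the rest alone.
db.things.update({ _id: 1 }, { $set: { b: 2 } }); // doc is now { _id: 1, a: 10, b: 2 }
```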
[20:00:30] <kali> clever_: this should be more explicit, or come early in the update page. i reported that as an issue, and it improved a bit, but not enough to my taste, so feel free to notify the author
[20:00:54] <kali> clever_: little buttons top right
[20:02:47] <clever_> i'm just used to the sql style, where update only sets the fields you give it
[20:03:25] <kali> clever: most people who read these pages are in the same position :)
[20:03:43] <clever> i've also had some minor problems with the array operators
[20:05:54] <kali> joannac: i think the first paragraph should mention the two kinds of updates. probably as the second sentence, before starting the "multi" discussion. you want me to open a new issue ?
[20:05:58] <clever> i would not expect an array to ever match a single object
[20:06:17] <clever> or is mongo doing the unexpected?
[20:06:48] <joannac> clever: or your expectations are wrong :p
[20:08:35] <ranman> clever: the best way I've found to think about arrays of embedded objects in mongodb are to imagine them being unrolled for the purposes of querying
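[Editor's note: ranman's "unrolled" mental model can be sketched in plain JavaScript. This is a toy matcher, not the real query engine: a condition like `{"tags.k": 1}` matches a document when ANY element of the `tags` array satisfies it.]

```javascript
// Toy illustration of how {"tags.k": 1} behaves against an array of
// embedded documents: the array is treated as if unrolled, and the
// document matches when any single element satisfies the condition.
function matchesDotPath(doc, path, value) {
    const [field, sub] = path.split(".");
    const elements = doc[field];
    if (!Array.isArray(elements)) return false;
    return elements.some(
        el => el !== null && typeof el === "object" && el[sub] === value
    );
}

const doc = { tags: [{ k: 1 }, { k: 2 }] };
console.log(matchesDotPath(doc, "tags.k", 1)); // true  — element { k: 1 } matches
console.log(matchesDotPath(doc, "tags.k", 3)); // false — no element has k === 3
```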
[20:09:22] <clever> i'll have to keep that in mind for any future array work i do
[20:09:37] <joannac> kali: "Modifies an existing document or documents in a collection, by either updating specific fields in the existing document(s), or replacing the document(s) entirely."
[20:13:47] <kali> but the two behaviours of update are so different it is really confusing, close to an api misdesign, imho, so i would bend the principle and put this closer to the beginning
[20:14:14] <kali> but we can try your version and iterate if/when we see another beginner confused :)
[20:16:27] <joannac> kali: Modifies an existing document or documents in a collection. Can be used to either modify only specific fields, or to replace the document or documents entirely, depending on the parameters passed.
[20:16:43] <joannac> kali: you know, we should have this discussion in a docs ticket :p
[21:10:45] <rafaelhbarros> retran: I remember reading something like that a while back
[21:11:01] <retran> cool, just interested. i figure a map-reduce is executed server side
[21:11:09] <retran> but i wonder if same for ad-hoc javascript
[21:11:24] <ranman> retran: certain javascript is executed serverside but things you run in the shell are executed in the shell, stuff you pass in for where query or things like that will run on the server
[23:10:21] <retran> you know what i think? i think cursors are really slow
[23:33:00] <qswz> http://stackoverflow.com/questions/5719408/mongo-find-items-that-dont-have-a-certain-field actually which option is better? the 2nd answer looks better to me
[23:35:36] <qswz> because I'd use a field "share" that, when not set, defaults to null and will still be returned in searches
[23:40:30] <Joeskyyy1> As long as your share field will always have that schema you should be safe with the second
[23:40:47] <Joeskyyy1> because $exists can't use indexes, if you're trying to add an index to help speed up the query
[23:40:53] <Joeskyyy1> ^ correct me someone if I'm wrong.
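[Editor's note: the distinction from that thread, sketched in the shell. Querying for `null` matches both "field set to null" and "field missing", and an equality match on `null` can use an ordinary index, whereas `$exists: false` could not use an index efficiently on 2013-era servers:]

```javascript
db.items.insert({ _id: 1, share: true });
db.items.insert({ _id: 2, share: null });
db.items.insert({ _id: 3 });                  // no share field at all

db.items.find({ share: null });               // matches _id 2 AND _id 3
db.items.find({ share: { $exists: false } }); // matches _id 3 only
```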