PMXBOT Log file Viewer


#mongodb logs for Tuesday the 28th of January, 2014

[00:22:19] <NaN> hi there
[00:26:19] <NaN> can I do a push to a value that doesn't exist yet? I mean, is it like upsert or something?
[00:34:09] <NaN> ...my fault, the value was an obj
[00:55:05] <george2> I currently have these functions http://paste.chakra-project.org/6322/ saved to my db.system.js. Can anyone help me figure out how I should run them over my dataset http://pastebin.kde.org/pc163fbcb/08004631/raw ?
[00:55:05] <george2> I want to collect all the posts matching userID "X" and put them through gyration() to get a radius of gyration measurement for each individual user. (pseudocode -> http://pastebin.kde.org/pa6a1e057 )
[00:55:05] <george2> The "frequency" mentioned in the JS is the number of data points for the user at location (lat, long)
[00:55:05] <george2> I'm unsure whether I need to do something with map/reduce (which I haven't used before), or if I can just use an aggregation pipeline, or if I need to do something entirely different.
[00:55:05] <george2> If it matters, I will be running this over a few hundred million data points, with probably a few hundred thousand unique userIDs.
[00:55:06] <george2> I don't want to spend hours on this only to realize I'm going about it completely wrong, so if somebody could point me in the right direction, I'd appreciate it. :)
[02:01:00] <Hyperking> How would i go about updating a key in my collection using the mongo shell?
[02:01:44] <Hyperking> I need to update posts with a pubdate: to published_date
[02:04:50] <steve1> Hyperking: try this: https://stackoverflow.com/questions/4146452/mongodb-what-is-the-fastest-way-to-update-all-records-in-a-collection
[02:08:42] <Hyperking> looking at the mongo docs. http://docs.mongodb.org/manual/reference/method/db.collection.update/
[02:09:13] <Hyperking> how would i write this using the update method?
[02:12:05] <Hyperking> db.posts.update( $type pubdate, $all , published_date, multi = true)
[02:13:33] <Hyperking> that obviously doesn't work. but i have no clue on how the syntax should be written
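The rename Hyperking is after can be sketched with the $rename update operator. A minimal sketch, assuming the `posts` collection and field names from the conversation above:

```javascript
// $rename moves each document's `pubdate` value to a new `published_date` key.
var filter = {};                                            // match every document
var update = { $rename: { pubdate: "published_date" } };    // rename spec
// In the mongo shell this would be run as:
//   db.posts.update(filter, update, { multi: true })
```

With `multi: true` the update touches every matching document instead of only the first.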
[02:15:28] <george2> > db.data_test.aggregate({$group: {_id: "$userID"}}).result.forEach(function(i){db.data_test.find({userID: i._id});})
[02:15:28] <george2> this returns nothing, what am I doing wrong?
[02:19:21] <Logicgate> hey guys
[02:19:31] <Logicgate> how would one select a random record efficiently?
[02:21:34] <jamiel> Hi all, db.serverStatus() is reporting 65 open connections. Could anyone recommend a way to debug where these are coming from? Currently there is little to no utilisation on my application, although I do have replication enabled to a single secondary node.
[02:21:40] <george2> Logicgate: http://stackoverflow.com/questions/2824157/random-record-from-mongodb
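One common approach from the linked Stack Overflow answer, sketched here: count the collection, then skip to a random offset. The `records` collection name is illustrative; note that skip() still walks past the skipped documents, so this is O(n) per query:

```javascript
// Pick a uniformly random offset into a collection of `count` documents.
function randomOffset(count) {
  return Math.floor(Math.random() * count);   // integer in [0, count)
}
// In the mongo shell:
//   var n = db.records.count();
//   db.records.find().limit(1).skip(randomOffset(n));
```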
[02:33:27] <george2> if I do
[02:33:27] <george2> > a = []
[02:33:27] <george2> > db.data_test.aggregate({$group: {_id: "$userID"}}).result.forEach(function(i){ a.push(i._id) })
[02:33:27] <george2> a has the correct data. but that's going to be a lot of information to store in a temp variable
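Why the earlier one-liner printed nothing: inside forEach, `find()` returns a cursor that is silently discarded; only expressions typed directly at the shell prompt are auto-printed. A sketch, using the field names from george2's snippets:

```javascript
// In the mongo shell, print (or otherwise consume) each matching document:
//   db.data_test.aggregate({ $group: { _id: "$userID" } }).result.forEach(function (i) {
//     db.data_test.find({ userID: i._id }).forEach(printjson);
//   });
// The same idea in plain JavaScript terms: results have to be accumulated
// or printed, not merely produced and dropped.
function collect(ids, findFn) {
  var out = [];
  ids.forEach(function (id) {
    findFn(id).forEach(function (doc) { out.push(doc); });
  });
  return out;
}
```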
[02:42:46] <sheki> i am trying to insert data into a mongodb and Update the object using another connection in quick succession,
[02:42:56] <sheki> the update returns with "not-found" error
[02:43:05] <sheki> is there a simple explanation for this?
[02:57:30] <Hyperking> I have a key with a ISODate() value and need to update my collection to show a different key. How would you call a method new Date() within the update method?
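On calling `new Date()` within an update: the Date is evaluated client-side when the update document is built, then sent to the server as a BSON date. A sketch, reusing the `posts` / `published_date` names assumed earlier:

```javascript
// Set a field to the current time on every document.
var update = { $set: { published_date: new Date() } };
// In the mongo shell:
//   db.posts.update({}, update, { multi: true })
```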
[03:04:18] <NaN> is there a way to exclude sub-document values with a condition?
[03:04:50] <cheeser> you can compare docs to each other. but they have to be a total match
[03:08:42] <NaN> and the compare will give me the excluded values? I don't get it
[03:09:31] <cheeser> oh, you want to exclude from the results, not exclude docs with those subdocs.
[03:09:57] <NaN> yes, from the result... from find()
[03:10:16] <cheeser> no, i don't think you can do that
[03:10:31] <NaN> then I need to 'recreate' the entire object
[03:11:17] <NaN> or at least, filter with some array functions to 'clean it'
[04:17:00] <Secretmapper> hi guys I have a data modeling question
[04:17:44] <Secretmapper> I have a user collection that has assets
[04:18:02] <Secretmapper> every asset is a new object that has many properties
[04:26:19] <cheeser> and your question is ... ?
[04:40:45] <Secretmapper> i was reluctant because after thinking about it maybe i'm the only one
[04:40:52] <Secretmapper> that can answer how I should model my data
[04:40:59] <Secretmapper> but i'll put it out there regardless
[04:41:08] <Secretmapper> should I embed or should I reference?
[04:41:48] <Secretmapper> user can have a LOT of assets (hundreds average)
[04:42:12] <Secretmapper> though embedding makes more sense because they're assets
[04:42:19] <Secretmapper> I'm wary that it might be too much
[04:51:05] <cheeser> reference
[04:51:33] <Secretmapper> thanks
[04:52:11] <Hyperking> Any way to grab data from one collection to create another? I want to generate a menu that takes posts with post_type: "page" and add them into a insert for a new collection.
[04:52:37] <cheeser> it helps to think of what you'd *normally* want to fetch when getting, e.g., that user
[04:53:14] <cheeser> if you normally wouldn't want all those assets every time you loaded a user, it's usually best to store them separately and reference the user from each asset
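cheeser's advice sketched as documents: store assets in their own collection, each referencing its owner, so loading a user stays cheap. The `users` / `assets` / `ownerId` names are illustrative:

```javascript
// Reference, don't embed: the asset points back at its user.
var user  = { _id: 1, name: "alice" };
var asset = { _id: 100, ownerId: user._id, kind: "image" };
// Assets are fetched only when actually needed:
//   db.assets.find({ ownerId: user._id })
```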
[09:19:00] <ollivera> Do I need any special configuration with GridFS?
[09:32:28] <ncls> ollivera: no, it should work out of the box
[09:36:22] <ollivera> ncls, thank you. Are you familiar with pymongo?
[09:39:31] <ollivera> ncls, I had problems with pymongo but I managed to insert a file with mongofiles
[09:50:02] <ncls> ollivera: no sorry, I don't use MongoDB with Python
[09:56:30] <remonvv> \o
[09:58:05] <NyB> How would I go about testing my Java BSONDecoder implementation? Are there any MongoDB database dumps somewhere that would exercise all corner cases?
[09:58:48] <ollivera> ncls, it is fine. mongofiles worked for me ...
[09:59:23] <ollivera> ncls, would it be possible to define paths? it seems that I always insert in the same location
[10:02:29] <ncls> ollivera: no, I don't think it's possible, it's not like a tree
[10:04:07] <ncls> you might put a file on the "name" field like { "name": "/my/file/path/name.ext" }
[10:04:27] <ncls> a path, sorry
[10:05:28] <ncls> it's not "name", it's "filename"
[10:06:03] <ncls> but all the records are stored at the same "root level"
[10:08:18] <ollivera> ncls, ok .. so I can use that "path" in the URL? Do you serve the files with Apache or NGinx?
[10:18:35] <ncls> ollivera: yes, if you want, but remember that this is not an OS file system: there are no "read / write" permissions on it, you can only emulate the tree and the permissions with your application. You might want to store additional information (owner, permissions, filetype, size, etc) in a "file_infos" collection for example
[10:19:58] <ncls> No, I didn't use Apache or NGinx, but it doesn't matter: Apache will serve script files (php or python) that can handle GridFS and render the file with the right "content type" I think
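What ncls describes, sketched: GridFS is flat, but a path can be emulated in the `filename` field of each fs.files document, with any extra metadata in your own collection. The `file_infos` name and fields are the ones suggested above; the path comes from the example earlier:

```javascript
// GridFS stores everything at one "root level"; the path lives in `filename`.
var fileDoc  = { filename: "/my/file/path/name.ext", length: 1024 };
// Application-level metadata (owner, permissions, ...) in a side collection:
var fileInfo = { filename: fileDoc.filename, owner: "alice", mode: "rw" };
// "Directory" listings then become prefix queries:
//   db.fs.files.find({ filename: /^\/my\/file\/path\// })
```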
[10:26:44] <ollivera> ncls, what happens when we remove a file? Is it like GlusterFs? will it trigger a synchronization?
[10:27:40] <ncls> ollivera: I don't know because I never used MongoDB with synchronised databases on different servers, but I think that yes, it has the same behaviour as "normal" collections
[10:28:01] <ncls> you might want a more informed source confirmation though
[11:11:04] <arount> hi here (we are speaking english here right ?)
[11:11:34] <Derick> yes :-)
[11:11:43] <arount> Derick: ok :)
[11:12:44] <arount> I have little trouble with mongoexport command, maybe someone have the explanation to my problem
[11:14:15] <arount> I'm using mongoexport command, I have a pretty big collection (92263.063MB) indexed by _id and a field report_id
[11:15:29] <arount> I would like to export all the collection sorted by report_id, so I use a command like: mongoexport -h 127.0.0.1:27017 -d database -c collection -q '{$query: {$orderby: {report_id:-1}}' -o dumps/2014-01-28/collection.json
[11:16:04] <arount> Ooops, bad copy / paste, the real command is: mongoexport -h 127.0.0.1:27017 -d database -c collection -q '{{$orderby: {report_id:-1}}' -o dumps/2014-01-28/collection.json
[11:17:17] <arount> but I get (got ? .. arf, french and english .. a big love story) just one line
[11:18:00] <Derick> what is the item that you get then?
[11:18:04] <Derick> (and it's "got")
[11:18:50] <arount> I tried to add $query: {} to my parameter -q, or $query: {report_id:{$ne:null}} but nothing .. if I delete -q parameter all my data are exported .. I don't really understand why
[11:19:48] <arount> Derick: Fuck ... I'm stupid ! haaa, I got: { "$err" : "too much data for sort() with no index. add an index or specify a smaller limit", "code" : 10128 }
[11:20:03] <arount> .. tsss I just have no indexes on local ..
[11:20:22] <arount> thanks, I'm a little stupid boy
[11:20:47] <Derick> hehe, it's ok :-)
[11:36:08] <arount> Derick: I added --forceTableScan (to avoid the $snapshot error) and it works! perfect :)
[11:41:13] <Derick> i would have added an index...
[11:43:18] <arount> Derick: have an index on my field
[11:43:20] <DevRosemberg> does anyone in here know java drivers of mongo?
[11:44:09] <arount> Derick: maybe i do something wrong with my db, i check (you seem to say that forcetablescan is not necessary with indexes)
[11:44:12] <ron> DevRosemberg: just ask your question
[11:44:16] <Derick> it shouldn't be, no
[11:44:38] <DevRosemberg> http://pastie.org/8673538 in there, Achievements, Permissions and Purchased Items dont work
[11:44:38] <DevRosemberg> , they dont save
[11:46:24] <NyB> err... does MongoDB support UUID type 4 ?
[11:46:53] <NyB> the current stable Java driver seems to only support v3
[11:48:19] <arount> Derick: you're right, it was just an error in the shell script that execs the mongoexport command, perfect, really thx
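The fix arount arrived at, sketched: the "too much data for sort() with no index" error (code 10128) goes away once the sort field is indexed, so the export's $orderby can use the index instead of an in-memory sort:

```javascript
// Index matching the sort direction used in the mongoexport query above.
var indexSpec = { report_id: -1 };
// In the mongo shell (2.x era):
//   db.collection.ensureIndex(indexSpec)
// then: mongoexport ... -q '{ $query: {}, $orderby: { report_id: -1 } }'
```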
[12:07:46] <johnbenz> Hi, I need to move one of my mongo db to a new server. So I've decided to first configure the standalone mongo instance I have to a master instance. I went in the /etc/mongod.conf and uncommented master=true
[12:08:04] <johnbenz> I did a restart and then it was not working anymore
[12:08:25] <johnbenz> sudo service mongod status was ok
[12:08:49] <johnbenz> but I couldn't connect to the mongod instance anymore doing "mongo" from the command line
[12:08:52] <johnbenz> any idea??
[12:09:32] <johnbenz> FYI: this a production
[12:09:44] <johnbenz> anybody here??
[12:16:40] <johnbenz> somebody can help me, please??
[12:18:04] <byalaga> johnbenz: is the server which you are planning to move a replicaset member?
[12:18:14] <johnbenz> no
[12:18:23] <johnbenz> I want just to use master/slave
[12:18:49] <johnbenz> I have only a standalone server and I want to move it to a new server with as little downtime as possible
[12:19:38] <johnbenz> my idea is to first do master/slave and when everything is synchronized, change the server ip in my code, comment out slave = true, and restart
[12:19:57] <johnbenz> byalaga: does that seem like the right way to do it?
[12:19:59] <byalaga> So, did you start the existing mongodb as a replicaset member? or a standalone one?
[12:20:08] <johnbenz> standalone for the existing
[12:20:41] <byalaga> afaik, you can't add a slave to it.. it has to be started as a single replicaset member.
[12:21:03] <johnbenz> I'm trying to start it as a master, is that possible?
[12:21:23] <byalaga> yes! it is..
[12:21:25] <johnbenz> http://docs.mongodb.org/manual/core/master-slave/
[12:21:49] <johnbenz> I've read this, but because I'm starting the server with "sudo service mongod start"
[12:22:08] <johnbenz> I've decided to change the parameters in /etc/mongod.conf
[12:22:19] <byalaga> you should stop the current node and start it back as a single replicaset member.
[12:22:29] <byalaga> and then add a new member to it..
[12:23:28] <byalaga> 2nd method:
[12:23:28] <byalaga> take a hot backup of the existing one and setup a new node(this should be started as single replica member)
[12:23:28] <johnbenz> not sure I understand. You're confirming that replicaset and master are two different things, right??
[12:24:08] <byalaga> nope: replicaset = master+slaves
[12:25:23] <johnbenz> Ok. So to start as a single replicaset member: I do
[12:25:29] <johnbenz> "sudo service mongod stop"
[12:25:42] <johnbenz> change the parameter in /etc/mongod.conf => http://docs.mongodb.org/manual/reference/configuration-options/#master
[12:25:45] <johnbenz> to true
[12:25:55] <johnbenz> and do "sudo service mongod start" right?
[12:26:02] <johnbenz> or am I forgetting something else??
[12:27:51] <byalaga> johnbenz: http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/ follow this
[12:28:35] <johnbenz> byalaga: is this mandatory? mongod --port 27017 --dbpath /srv/mongodb/db0 --replSet rs0
[12:28:43] <johnbenz> I would prefer to do it via config
[12:29:37] <byalaga> then, iirc there's something you have to mention for replicaset in your config file.
[12:30:25] <johnbenz> because the daemon is run by /sbin/service, so I don't want to start tweaking how mongo is run
[12:30:50] <byalaga> something like replSet = rs0 in your config gile
[12:30:55] <byalaga> err, file
[12:31:21] <johnbenz> and how I know it will be the master?
[12:31:55] <byalaga> as you have only one mongodb in the replicaset.. it will be the master
[12:32:05] <byalaga> when you do rs.initiate()
[12:32:15] <byalaga> you can see... it will be turned into master
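byalaga's steps, collected here as a sketch. The `rs0` set name comes from the tutorial linked above, and `replSet` is the 2.x-era `/etc/mongod.conf` key:

```
# /etc/mongod.conf -- minimal replica-set setting (2.x config format)
replSet = rs0

# then restart the daemon and initiate the set from the mongo shell:
#   sudo service mongod restart
#   mongo --eval "rs.initiate()"
# with only one member in the set, that member becomes the primary (master)
```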
[12:33:06] <bzikarsky> Hm. Why is this collection not balanced across both shards? http://pastie.org/private/rqcenolh5xlt2ztjzmekzw
[12:33:43] <johnbenz> ok and when I'll add a new server to the replica? Is there any risk that the master (formerly standalone) becomes slave?
[12:41:43] <RaviTezu> johnbenz: The node which you're going to add to existing one will be a slave.
[12:42:09] <johnbenz> RaviTezu: ok
[12:42:42] <RaviTezu> and it depends on many factors.. to decide which should be a primary(master) and which is secondary(slave)
[12:43:16] <johnbenz> RaviTezu: from what I'm reading here "master/slave" is different from "replicaset" http://docs.mongodb.org/manual/core/master-slave/?pageVersion=%5B%2721%27%5D
[12:43:49] <Derick> yes, they are quite different
[12:43:54] <Derick> master/slave does not do automatic failover
[12:43:59] <johnbenz> this is this kind of "master/slave" I want, I don't want to begin with replicaset as I just need to copy the database before moving it
[12:44:00] <Derick> replicasets do
[12:44:38] <johnbenz> Derick: It's just for some hours, or maybe less. I don't want failover; it's just copying from one db to another with as little downtime as possible
[12:45:03] <Derick> I thought that for M/S you needed to shutdown the master first
[12:45:16] <Derick> for a replicaset, you do have to do that too though...
[12:45:17] <johnbenz> Derick: if I just set in /etc/mongod.conf => master = true it should do the trick for the master right?
[12:45:20] <Derick> (ignore that statement)
[12:45:30] <Derick> i think so, yes
[12:45:36] <Derick> I've never used it myself actually.
[12:45:51] <Derick> how much data is this?
[12:46:22] <johnbenz> and then in the slave I just need to set "slave = true " and source
[12:46:58] <johnbenz> half a Gigabyte
[12:47:39] <Derick> oh
[12:47:46] <Derick> copying that takes like 20 seconds, not?
[12:48:02] <Derick> I wouldn't even bother with m/s or replication in that case
[12:48:29] <Derick> lunch, bbl
[12:48:43] <johnbenz> Maybe but I don't want to take the risk, it's a website with a very high traffic
[12:48:56] <johnbenz> i cannot take the chance it'll be down for too long
[12:51:33] <johnbenz> this should work without downtime? db.copyDatabase( "test", "records", "db0.example.net" )
[13:00:48] <DevRosemberg> http://pastie.org/8673538 in there, Achievements, Permissions and Purchased Items dont work
[13:00:49] <DevRosemberg> they dont save
[13:00:51] <DevRosemberg> Any ideas?
[13:22:30] <Mmike> Hello. I have 3 boxes in replica set, and I'd like to run repair against my datadirectory (as I need to reclaim disk space). Is this ok path to take: 1) shut down one secondary. 2) run mongod --repair, 3) fire it up. When it's in sync, do next secondary. 4) When that secondary is done too, step down primary, and do him.
[13:22:35] <Mmike> Is this ok?
[13:24:38] <DevRosemberg> http://pastie.org/8673538 in there, Achievements, Permissions and Purchased Items dont work
[13:24:39] <DevRosemberg> Any ideas?
[13:29:27] <Nodex> seems something is broken somewhere
[13:34:47] <DevRosemberg> Yes but what
[13:44:26] <joannac> DevRosemberg: What does "Don't work" mean?
[13:44:52] <DevRosemberg> They dont save
[13:46:05] <joannac> But punishments do?
[13:46:54] <DevRosemberg> no
[13:47:08] <DevRosemberg> Punishments dont, Achievements dont, Purchased Items dont
[13:47:39] <joannac> What does work then?
[13:47:50] <DevRosemberg> Permissions, Comets,
[13:47:52] <DevRosemberg> etc
[13:48:06] <DevRosemberg> Punishments, Achievements and Purchased Items are the only things that don't work
[13:48:34] <joannac> all three of those are sets
[13:48:46] <joannac> Are you not saving them properly?
[13:49:07] <DevRosemberg> idk, can you check?
[13:54:30] <DevRosemberg> jonnac, any ideas?
[13:54:36] <joannac> um, no. it's your code. I've given you a pointer on where to look.
[13:54:58] <joannac> I don't have the time to look through and understand your code.
[13:58:39] <cheeser> and by "doesn't work" you mean ... ?
[13:58:55] <Nodex> [13:44:48] <DevRosemberg> Punishments dont, Achievements dont, Purchased Items dont
[13:59:03] <Nodex> + [13:42:32] <DevRosemberg> They dont save
[13:59:22] <Nodex> throws*
[13:59:33] <cheeser> thanks. mine was strangely cloudy today. probably because of the cold.
[14:01:31] <Nodex> haha
[14:02:47] <remonvv> Wow.
[14:02:57] <DevRosemberg> cheeser any idea?
[14:03:09] <remonvv> Here's 600 lines of my code, please check it.
[14:03:39] <DevRosemberg> but there are around 30 causing the error
[14:03:42] <DevRosemberg> and its on the save method
[14:04:33] <remonvv> That's actually quite unlikely. The DBCollection.save() method certainly isn't broken.
[14:04:54] <DevRosemberg> i mean
[14:05:00] <DevRosemberg> the collection
[14:05:01] <remonvv> And I'm assuming that getPlayerCollection() method for example returns a DBCollection and not some sort of abstraction.
[14:05:05] <DevRosemberg> the Set's
[14:05:07] <DevRosemberg> are not calling
[14:05:08] <Nodex> does it give any error's?
[14:05:12] <DevRosemberg> no
[14:05:28] <remonvv> Make the smallest possible reproduction first.
[14:05:46] <DevRosemberg> ?
[14:06:20] <Nodex> this is why java is not enterprise ready, better to use basic imo
[14:06:25] <DevRosemberg> http://pastie.org/8675349
[14:06:29] <remonvv> True story.
[14:06:31] <DevRosemberg> thats the DatabaseManager
[14:06:51] <Nodex> remonvv : not "true story" - cool story
[14:07:04] <remonvv> Nothing you ever say is deserving of the "cool" label sir.
[14:07:42] <DevRosemberg> remonvv anything wrong in the DatabaseManager?
[14:07:59] <remonvv> DevRosemberg, specific to your problem?
[14:08:13] <Nodex> ice?
[14:08:16] <DevRosemberg> yes
[14:08:27] <remonvv> Because that try/catch around just the mongoClient initialization is doing my head in
[14:08:45] <remonvv> No, looks okay.
[14:08:47] <Nodex> haha, you sound like you're from the UK
[14:08:54] <Nodex> "doing my head in"
[14:09:14] <remonvv> I adapt.
[14:09:30] <Nodex> when in Rome
[14:10:19] <remonvv> DevRosemberg: There doesn't seem anything wrong with the code you're showing. Also, "unvanish" -> "appear"?
[14:11:36] <DevRosemberg> ya
[14:11:39] <DevRosemberg> nvm about that
[14:11:40] <remonvv> You can try the save directly rather than queue it for async execution
[14:11:48] <DevRosemberg> where?
[14:12:01] <DevRosemberg> and how
[14:13:12] <remonvv> http://pastie.org/8673538#130,131,132,133,134,135,136,137
[14:13:19] <remonvv> highlighted lines
[14:13:33] <cheeser> well that worked out well
[14:14:13] <DevRosemberg> what with that remonvv
[14:16:09] <remonvv> remove all the async stuff
[14:16:48] <remonvv> http://pastie.org/8675366
[14:17:26] <remonvv> afk
[14:17:30] <DevRosemberg> why woud that fix it?
[14:18:16] <remonvv> No idea. It shouldn't be broken but there's a lot of magic going on with "TheCosmosCore"..try it and see.
[14:18:31] <remonvv> The proper way is to reduce the problem to the smallest amount of code that still has the issue.
[14:18:33] <remonvv> But yeah, effort.
[14:18:37] <DevRosemberg> what?
[14:19:23] <Nodex> DevRosemberg : reduce your code until the problem goes away, then add it back till it breaks and there is your problem
[14:19:34] <Nodex> there is a pattern name for this, can't recall what it is
[14:21:03] <Nodex> continuous integration or something
[15:20:39] <Kaim> hi all
[15:20:48] <Kaim> I cannot go over 400 updates/sec on my mongodb instance
[15:21:07] <Kaim> I'm doing atomic updates, not multi updates
[15:21:21] <Kaim> with a query on a unique, indexed field
[15:21:30] <Kaim> how can I improve perf? a write problem?
[15:23:31] <mehola> hi guys is there a way to do case insensitive sorting in mongodb or is it still missing?
[15:25:36] <mehola> anybody?
[15:27:35] <Derick> it's not there yet
[15:28:22] <mehola> sigh...that's quite disappointing. I used mongo 2+ years ago, I was hoping it would have been implemented by now...
[15:29:06] <mehola> thanks Derick
[15:29:16] <Kaim> anybody for my question? :(
[15:29:29] <mehola> kaim what's your question?
[15:30:28] <Kaim> I cannot go over 400 updates/sec on my mongodb instance
[15:30:33] <Kaim> I'm doing atomic updates, not multi updates
[15:30:37] <Kaim> with a query on a unique, indexed field
[15:30:41] <Kaim> how can I improve perf? a write problem?
[15:34:03] <cheeser> mehola: did you file a ticket on that?
[15:37:40] <mehola> cheeser: you mean on the case insensitive sorting? It was a known issue then, so no I didn't file a ticket on it
[15:39:02] <mehola> their argument was that casing is hard with UTF8...too many characters...
[15:39:59] <joannac> mehola: why don't you store your data in lower-case?
[15:40:59] <cheeser> yeah. when i've needed case-insensitive searches i just double store with one normalized to either upper or lower
[15:42:05] <mehola> cheeser: that's fine, but when you think about the fact that you have to store a lower-cased version of any string field you may want to sort by... it just seems dirty to me
[15:42:46] <cheeser> on the other hand, i solved it and got on with life.
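The double-store workaround joannac and cheeser describe, sketched: keep a normalized copy of each sortable string alongside the original, and sort/index on the copy. Field names here are illustrative:

```javascript
// Attach a lower-cased sort key before inserting.
function withSortKey(doc) {
  doc.name_lower = doc.name.toLowerCase();   // index and sort on this field
  return doc;
}
// In the mongo shell:
//   db.things.insert(withSortKey({ name: "Apple" }))
//   db.things.find().sort({ name_lower: 1 })   // case-insensitive order
```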
[15:42:53] <mehola> :P
[15:43:31] <mehola> cheeser: I'm not refuting the work-around...just saying...
[15:52:57] <Kaim> mehola, no idea?
[15:53:42] <mehola> Kaim: The only thing that I can think of is lazy writes
[15:54:02] <Kaim> yes...
[15:54:23] <mehola> Kaim: well are you doing them?
[15:54:23] <Kaim> how can I counter that
[15:54:34] <shadfc> Kaim: what type of disk are you writing to?
[15:54:48] <Kaim> I write in /dev/shm
[15:58:56] <mehola> Kaim: you may have looked at this already but I think the journaling part may be of interest to you http://docs.mongodb.org/manual/core/write-performance/
[16:01:24] <Kaim> I may try to disable it
[16:01:46] <joannac> Erm...
[16:03:10] <Kaim> hum?
[16:06:24] <joannac> Please don't disable journalling. It's a false economy. You'll get fast writes until something goes wrong, and then you'll have corrupt data
[16:06:56] <mehola> joannac: I wasn't telling him to disable it
[17:00:32] <DevRosemberg> Does anyone know what can be causing parts of my BasicDBObject to not save
[17:01:40] <cheeser> mongo will save what you send it...
[17:03:18] <DevRosemberg> cheeser
[17:03:26] <DevRosemberg> mind taking a look at it please?
[17:03:49] <DevRosemberg> http://gyazo.com/56d0f2ac3a2da355b93e79981bc6c4b7
[17:03:53] <DevRosemberg> it shouldn't look like that
[17:05:08] <DevRosemberg> it should store the Punishments and stuff IN the DBObject
[17:05:38] <cheeser> that's just a wall of text...
[17:05:51] <cheeser> have you condensed things down to a repeatable test?
[17:06:28] <DevRosemberg> i mean
[17:06:52] <DevRosemberg> its not saving the punishments, achievements and purchased items
[17:10:40] <cheeser> i'm all but certain it's a bug in your code. but without a test case to isolate, i really can't tell either way.
[17:27:42] <yawniek> hi! when i do a aggregation with $addToSet, is there an easy way to not get the whole set but just the length of that set in the result?
[17:28:13] <cheeser> $size ?
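cheeser's $size suggestion, sketched as a two-stage pipeline: build the set with $addToSet, then project only its length. Collection and field names are illustrative, and the $size aggregation operator needs a server version that supports it (2.6+):

```javascript
// Group into a set, then keep only the set's size in the output.
var pipeline = [
  { $group:   { _id: "$userID", tags: { $addToSet: "$tag" } } },
  { $project: { count: { $size: "$tags" } } }
];
// In the mongo shell:
//   db.events.aggregate(pipeline)
```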
[17:29:08] <DevRosemberg> cheeser
[17:29:10] <DevRosemberg> but i mean
[17:29:18] <DevRosemberg> only the lists are not saving
[17:29:20] <DevRosemberg> which is odd
[17:29:56] <cheeser> how's that test case coming?
[17:33:31] <NyB> hmmm, how would I go about creating a code_w_s (JS code with scope) BSON element in mongodb? Code elements without scope are simple functions, but what is the scoped equivalent?
[17:33:59] <NyB> code_w_s is the last thing that I have to test in my BSON decoder implementation :-)
[17:34:33] <DevRosemberg> what test case?
[17:35:27] <Nodex> [17:08:20] <cheeser> i'm all but certain it's a bug in your code. but without a test case to isolate, i really can't tell either way.
[17:35:29] <Nodex> that test case
[17:40:18] <DevRosemberg> what is a test case?
[17:45:35] <Nodex> a case where you give MINIMAL code to see if your program works or not
[17:49:25] <Nodex> Not quite sure how a programmer does not know what a test case is
[17:58:51] <NyB> Where are code elements with scope used in MongoDB? I am trying to figure out how to have one returned within a BSON document so that I can test my decoder implementation, but I have yet to find how to create one via the mongodb shell...
[18:14:21] <Darni> hi guys... is this the right place where to ask a pymongo (mongodb python API) question?
[18:14:43] <cheeser> sure.
[18:15:42] <Darni> I'm writing a Flask based server which accesses a mongo db... from what I see at http://api.mongodb.org/python/current/api/pymongo/mongo_client.html and http://api.mongodb.org/python/current/faq.html#how-does-connection-pooling-work-in-pymongo there's some connection pooling built-in
[18:16:44] <Darni> and I understand that if I make a global "client = MongoClient(...)", I can use it in a request and I'm not risking any race condition...
[18:16:47] <Darni> my question is...
[18:16:49] <cheeser> i believe that's correct
[18:17:34] <Darni> if I make a global "collection = MongoClient()['some_collection_name']", will I still get connection pooling correctly?
[18:18:27] <Darni> (my app uses a single collection, and integrates a lot of legacy code which uses that global as an API, that's the source of my question)
[18:26:43] <Darni> sorry, I messed up the example, it was "db = MongoClient()['some_db_name']"
[18:27:07] <Darni> (I meant "db" everywhere I said collection :) )
[18:27:37] <cheeser> i want to say "yes," but i'm not 100%.
[18:27:43] <cheeser> using the java driver, that'd be true.
[18:28:32] <Darni> ok, that's something... I'll try to figure out the source. Thanks cheeser
[19:08:03] <NaN> for an action-log collection on users (db.history), do you guys recommend using 1 document per user and appending the actions to a log: key
[19:08:39] <NaN> or creating a new document per action and using the user's _id as "fk"
[19:36:39] <DevRosemberg> Guys
[19:38:26] <Desert_> DevRosemberg: rather lonely
[19:40:53] <cheeser> NaN: one entry per history item
[19:41:38] <cheeser> otherwise your documents would grow too large eventually. and with separate docs you could purge old ones via TTL indexing or move them off into an archival collection
[20:09:49] <NaN> cheeser: good one, thanks :)
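cheeser's pointers, sketched: one document per action, referencing the user, with a TTL index to purge old entries automatically. The `db.history` name comes from NaN's question; the field names and the 30-day window are illustrative:

```javascript
// One document per history item, referencing the user.
var entry = { userId: 1, action: "login", createdAt: new Date() };
// TTL index: documents expire once `createdAt` is older than the window.
var ttlSeconds = 60 * 60 * 24 * 30;   // 30 days
// In the mongo shell:
//   db.history.ensureIndex({ createdAt: 1 }, { expireAfterSeconds: 2592000 })
//   db.history.insert(entry)
```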
[20:12:26] <cheeser> np
[20:20:49] <swak> So say if I'm doing a db where it contains profile info on a user that they can keep adding to. Should I make a new page for each addition?
[20:21:37] <cheeser> a new page?
[20:21:45] <swak> yeah
[20:21:52] <swak> db-collection-page-field
[20:22:07] <swak> Two ways I could go about this
[20:22:50] <swak> one is to use an array in the page for each addition; the other is to create a new page for each addition, using ids from a central location.
[20:24:16] <swak> I'm just wondering how much I should break things apart.
[20:37:11] <swak> So say if I'm doing a user profile in which the user can keep adding info. I'm planning on separating different categories into different collections. However, should I store everything for a category for a user into one page using an array or should I store an id in an array pointing to each addition?
[20:41:57] <NaN> tried to import with mongoimport but it tells me "exception:BSON representation of supplied JSON is too large"
[20:42:01] <NaN> my json is valid
[20:42:03] <NaN> any suggestions?
[20:42:49] <swak> Did you use something to verify the formatting?
[20:43:55] <swak> NaN: http://stackoverflow.com/questions/19441228/insert-json-file-into-mongodb
[20:44:01] <swak> maybe try this out
[20:44:13] <NyB> NaN: it could also be that you are trying to upload a document larger than the maximum size allowed by MongoDB. Did the document come from MongoDB or from some other source?
[20:44:46] <NaN> NyB: hand made (but valid)
[20:44:53] <NaN> swak: I'll read it
[20:47:31] <NaN> so it must be 1 document per line
[20:47:34] <NaN> I see..
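The mongoimport constraint NaN ran into, sketched: without the --jsonArray flag, the tool expects one complete document per line, so each line must parse as standalone JSON. Sample data is illustrative:

```javascript
// Newline-delimited JSON, as mongoimport expects it by default.
var lines = [
  '{ "name": "a", "n": 1 }',
  '{ "name": "b", "n": 2 }'
].join("\n");
// Each line parses on its own:
var docs = lines.split("\n").map(function (l) { return JSON.parse(l); });
```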
[20:58:37] <slyth> hi
[20:58:48] <slyth> i have a problem with mongodb c# driver
[20:59:37] <slyth> MongoCursor<BsonDocument> convert to List<BsonDocument> is very slow
[21:01:32] <slyth> can nobody help me?
[21:10:25] <kali> slyth: cursors are lazy, so it's not the conversion but the actual database access that you pay for when you convert to a list
[21:10:37] <cheeser> i'm not a c# guy (though I can mostly read it) but it's virtually impossible to diagnose speed issues without any details. show your code?
[21:10:48] <cheeser> might be an indexing issue, e.g.
[21:58:45] <h4k1m> hi guys
[21:59:04] <h4k1m> I'm a noob trying to learn to use mongodb & I have a simple question
[21:59:38] <h4k1m> suppose that I have a simple web app with user connecting & publishing posts
[22:00:19] <h4k1m> isn't it redundant to store users & posts in the same collection, with user credentials stored in each post
[22:00:48] <h4k1m> how can I model this app?
[22:12:58] <bloudermilk> Just wrote a mapReduce to calculate popularity scores for one of my collections (groups) based on another collection (messages). The reduce is keyed on the group's _id and writes to a "ranked_groups" collection. What is the idiomatic way to query "groups" based on the score field in "ranked_groups"?
[23:28:30] <brendan6> Does anyone know if there is a way to create an index so that that the query db.collection.find({'foo.0.bar': 1}) uses an index for a document {foo: [{bar: 1},{bar: 2}]}?
[23:29:09] <brendan6> db.collection.ensureIndex({'foo.bar': 1}) only uses the index with a find like db.collection.find({'foo.bar': 1})
[23:34:03] <thesheff17> can you do an update on a collection with $gte and $lte for a range of integers?
[23:34:10] <thesheff17> or do I have to do a find and then loop through them?
[23:37:22] <pgora2013> thesheff17: not sure if this is what you are looking for : {multi: true} as the third parameter
[23:39:33] <thesheff17> pgora2013: this is basically what I have in python http://pastebin.com/wax2VJJy
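What pgora2013 suggests, sketched: a single multi-update over an integer range with $gte/$lte, instead of a find-and-loop. Collection and field names are illustrative:

```javascript
// Match every document whose `n` falls in [10, 20] and flag it in one call.
var query  = { n: { $gte: 10, $lte: 20 } };
var update = { $set: { flagged: true } };
// In the mongo shell, {multi: true} applies the update to all matches:
//   db.items.update(query, update, { multi: true })
```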